2026-01-10 13:43:17.615066 | Job console starting
2026-01-10 13:43:17.632648 | Updating git repos
2026-01-10 13:43:17.748070 | Cloning repos into workspace
2026-01-10 13:43:18.226773 | Restoring repo states
2026-01-10 13:43:18.274906 | Merging changes
2026-01-10 13:43:18.987507 | Checking out repos
2026-01-10 13:43:19.395629 | Preparing playbooks
2026-01-10 13:43:20.218333 | Running Ansible setup
2026-01-10 13:43:27.298947 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-10 13:43:28.770885 |
2026-01-10 13:43:28.771086 | PLAY [Base pre]
2026-01-10 13:43:28.790030 |
2026-01-10 13:43:28.790193 | TASK [Setup log path fact]
2026-01-10 13:43:28.811299 | orchestrator | ok
2026-01-10 13:43:28.831862 |
2026-01-10 13:43:28.832074 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-10 13:43:28.862356 | orchestrator | ok
2026-01-10 13:43:28.875322 |
2026-01-10 13:43:28.875454 | TASK [emit-job-header : Print job information]
2026-01-10 13:43:28.937616 | # Job Information
2026-01-10 13:43:28.937920 | Ansible Version: 2.16.14
2026-01-10 13:43:28.938006 | Job: testbed-deploy-current-in-a-nutshell-ubuntu-24.04
2026-01-10 13:43:28.938048 | Pipeline: label
2026-01-10 13:43:28.938075 | Executor: 521e9411259a
2026-01-10 13:43:28.938096 | Triggered by: https://github.com/osism/testbed/pull/2818
2026-01-10 13:43:28.938118 | Event ID: 4a28f280-ee2a-11f0-88a7-9820c87091e7
2026-01-10 13:43:28.946935 |
2026-01-10 13:43:28.947104 | LOOP [emit-job-header : Print node information]
2026-01-10 13:43:29.115758 | orchestrator | ok:
2026-01-10 13:43:29.116075 | orchestrator | # Node Information
2026-01-10 13:43:29.116116 | orchestrator | Inventory Hostname: orchestrator
2026-01-10 13:43:29.116143 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-10 13:43:29.116165 | orchestrator | Username: zuul-testbed04
2026-01-10 13:43:29.116187 | orchestrator | Distro: Debian 12.12
2026-01-10 13:43:29.116218 | orchestrator | Provider: static-testbed
2026-01-10 13:43:29.116244 | orchestrator | Region:
2026-01-10 13:43:29.116265 | orchestrator | Label: testbed-orchestrator
2026-01-10 13:43:29.116286 | orchestrator | Product Name: OpenStack Nova
2026-01-10 13:43:29.116306 | orchestrator | Interface IP: 81.163.193.140
2026-01-10 13:43:29.143987 |
2026-01-10 13:43:29.144150 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-10 13:43:30.055392 | orchestrator -> localhost | changed
2026-01-10 13:43:30.063811 |
2026-01-10 13:43:30.063993 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-10 13:43:31.677876 | orchestrator -> localhost | changed
2026-01-10 13:43:31.695156 |
2026-01-10 13:43:31.695306 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-10 13:43:32.069590 | orchestrator -> localhost | ok
2026-01-10 13:43:32.077662 |
2026-01-10 13:43:32.077815 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-10 13:43:32.133405 | orchestrator | ok
2026-01-10 13:43:32.164508 | orchestrator | included: /var/lib/zuul/builds/20a7f85b13b844a5ba4884b904239a95/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-10 13:43:32.175622 |
2026-01-10 13:43:32.175757 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-10 13:43:33.621278 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-10 13:43:33.621522 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/20a7f85b13b844a5ba4884b904239a95/work/20a7f85b13b844a5ba4884b904239a95_id_rsa
2026-01-10 13:43:33.621562 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/20a7f85b13b844a5ba4884b904239a95/work/20a7f85b13b844a5ba4884b904239a95_id_rsa.pub
2026-01-10 13:43:33.621589 | orchestrator -> localhost | The key fingerprint is:
2026-01-10 13:43:33.621617 | orchestrator -> localhost | SHA256:vunA+Cf1KzfEBF6/8fvHiLTdUEiX6ghSV4DQK3migeM zuul-build-sshkey
2026-01-10 13:43:33.621641 | orchestrator -> localhost | The key's randomart image is:
2026-01-10 13:43:33.621680 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-10 13:43:33.621703 | orchestrator -> localhost | | .o ..o. .|
2026-01-10 13:43:33.621725 | orchestrator -> localhost | | .+.. ...|
2026-01-10 13:43:33.621746 | orchestrator -> localhost | | . .ooo. ..o |
2026-01-10 13:43:33.621766 | orchestrator -> localhost | | o . =.+. o.. .|
2026-01-10 13:43:33.621785 | orchestrator -> localhost | | . . o So. o+ . |
2026-01-10 13:43:33.621814 | orchestrator -> localhost | | E + .. o.o.o |
2026-01-10 13:43:33.621836 | orchestrator -> localhost | | . o..o . + * |
2026-01-10 13:43:33.621860 | orchestrator -> localhost | | ...oo+ o + +|
2026-01-10 13:43:33.621887 | orchestrator -> localhost | | .++o.o .o|
2026-01-10 13:43:33.621908 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-10 13:43:33.622008 | orchestrator -> localhost | ok: Runtime: 0:00:00.791538
2026-01-10 13:43:33.629902 |
2026-01-10 13:43:33.630089 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-10 13:43:33.685737 | orchestrator | ok
2026-01-10 13:43:33.717233 | orchestrator | included: /var/lib/zuul/builds/20a7f85b13b844a5ba4884b904239a95/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-10 13:43:33.772434 |
2026-01-10 13:43:33.772637 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-10 13:43:33.819440 | orchestrator | skipping: Conditional result was False
2026-01-10 13:43:33.839911 |
2026-01-10 13:43:33.840130 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-10 13:43:34.861924 | orchestrator | changed
2026-01-10 13:43:34.874226 |
2026-01-10 13:43:34.874373 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-10 13:43:35.160112 | orchestrator | ok
2026-01-10 13:43:35.186125 |
2026-01-10 13:43:35.186375 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-10 13:43:35.610352 | orchestrator | ok
2026-01-10 13:43:35.619665 |
2026-01-10 13:43:35.619812 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-10 13:43:36.063710 | orchestrator | ok
2026-01-10 13:43:36.070447 |
2026-01-10 13:43:36.070582 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-10 13:43:36.105383 | orchestrator | skipping: Conditional result was False
2026-01-10 13:43:36.114305 |
2026-01-10 13:43:36.114445 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-10 13:43:36.984326 | orchestrator -> localhost | changed
2026-01-10 13:43:37.014941 |
2026-01-10 13:43:37.015138 | TASK [add-build-sshkey : Add back temp key]
2026-01-10 13:43:37.471890 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/20a7f85b13b844a5ba4884b904239a95/work/20a7f85b13b844a5ba4884b904239a95_id_rsa (zuul-build-sshkey)
2026-01-10 13:43:37.472197 | orchestrator -> localhost | ok: Runtime: 0:00:00.020558
2026-01-10 13:43:37.479779 |
2026-01-10 13:43:37.479911 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-10 13:43:37.961988 | orchestrator | ok
2026-01-10 13:43:37.981877 |
2026-01-10 13:43:37.982059 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-10 13:43:38.029700 | orchestrator | skipping: Conditional result was False
2026-01-10 13:43:38.141683 |
2026-01-10 13:43:38.141827 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-10 13:43:38.672716 | orchestrator | ok
2026-01-10 13:43:38.696712 |
2026-01-10 13:43:38.696870 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-10 13:43:38.772715 | orchestrator | ok
2026-01-10 13:43:38.784893 |
2026-01-10 13:43:38.785103 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-10 13:43:39.566250 | orchestrator -> localhost | ok
2026-01-10 13:43:39.574688 |
2026-01-10 13:43:39.574818 | TASK [validate-host : Collect information about the host]
2026-01-10 13:43:41.027622 | orchestrator | ok
2026-01-10 13:43:41.056415 |
2026-01-10 13:43:41.056572 | TASK [validate-host : Sanitize hostname]
2026-01-10 13:43:41.152415 | orchestrator | ok
2026-01-10 13:43:41.158802 |
2026-01-10 13:43:41.159054 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-10 13:43:42.116719 | orchestrator -> localhost | changed
2026-01-10 13:43:42.124098 |
2026-01-10 13:43:42.127033 | TASK [validate-host : Collect information about zuul worker]
2026-01-10 13:43:42.677601 | orchestrator | ok
2026-01-10 13:43:42.687655 |
2026-01-10 13:43:42.687803 | TASK [validate-host : Write out all zuul information for each host]
2026-01-10 13:43:43.667708 | orchestrator -> localhost | changed
2026-01-10 13:43:43.681311 |
2026-01-10 13:43:43.681454 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-10 13:43:44.002172 | orchestrator | ok
2026-01-10 13:43:44.013034 |
2026-01-10 13:43:44.013186 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-10 13:44:41.120804 | orchestrator | changed:
2026-01-10 13:44:41.121199 | orchestrator | .d..t...... src/
2026-01-10 13:44:41.121268 | orchestrator | .d..t...... src/github.com/
2026-01-10 13:44:41.121313 | orchestrator | .d..t...... src/github.com/osism/
2026-01-10 13:44:41.121366 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-10 13:44:41.121417 | orchestrator | RedHat.yml
2026-01-10 13:44:41.138549 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-10 13:44:41.138567 | orchestrator | RedHat.yml
2026-01-10 13:44:41.138619 | orchestrator | = 2.2.0"...
2026-01-10 13:44:50.616135 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-10 13:44:50.634845 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-01-10 13:44:51.085808 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-10 13:44:51.880962 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-10 13:44:52.434415 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-10 13:44:53.214883 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-10 13:44:53.283452 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-10 13:44:54.004471 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-10 13:44:54.004612 | orchestrator |
2026-01-10 13:44:54.004624 | orchestrator | Providers are signed by their developers.
2026-01-10 13:44:54.004632 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-10 13:44:54.004640 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-10 13:44:54.004647 | orchestrator |
2026-01-10 13:44:54.004651 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-10 13:44:54.004663 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-10 13:44:54.004667 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-10 13:44:54.004671 | orchestrator | you run "tofu init" in the future.
2026-01-10 13:44:54.004853 | orchestrator |
2026-01-10 13:44:54.004865 | orchestrator | OpenTofu has been successfully initialized!
2026-01-10 13:44:54.004873 | orchestrator |
2026-01-10 13:44:54.004877 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-10 13:44:54.004885 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-10 13:44:54.004889 | orchestrator | should now work.
2026-01-10 13:44:54.004893 | orchestrator |
2026-01-10 13:44:54.004897 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-10 13:44:54.004901 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-10 13:44:54.004909 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-10 13:44:54.167413 | orchestrator | Created and switched to workspace "ci"!
2026-01-10 13:44:54.167481 | orchestrator |
2026-01-10 13:44:54.167488 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-10 13:44:54.167493 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-10 13:44:54.167500 | orchestrator | for this configuration.
2026-01-10 13:44:54.275166 | orchestrator | ci.auto.tfvars
2026-01-10 13:44:54.279940 | orchestrator | default_custom.tf
2026-01-10 13:44:55.232504 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-10 13:44:55.888715 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-10 13:44:56.072975 | orchestrator |
2026-01-10 13:44:56.073067 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-10 13:44:56.073074 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-10 13:44:56.073080 | orchestrator | + create
2026-01-10 13:44:56.073085 | orchestrator | <= read (data resources)
2026-01-10 13:44:56.073090 | orchestrator |
2026-01-10 13:44:56.073094 | orchestrator | OpenTofu will perform the following actions:
2026-01-10 13:44:56.073107 | orchestrator |
2026-01-10 13:44:56.073111 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-10 13:44:56.073115 | orchestrator | # (config refers to values not yet known)
2026-01-10 13:44:56.073120 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-10 13:44:56.073124 | orchestrator | + checksum = (known after apply)
2026-01-10 13:44:56.073128 | orchestrator | + created_at = (known after apply)
2026-01-10 13:44:56.073132 | orchestrator | + file = (known after apply)
2026-01-10 13:44:56.073136 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073165 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.073170 | orchestrator | + min_disk_gb = (known after apply)
2026-01-10 13:44:56.073174 | orchestrator | + min_ram_mb = (known after apply)
2026-01-10 13:44:56.073178 | orchestrator | + most_recent = true
2026-01-10 13:44:56.073182 | orchestrator | + name = (known after apply)
2026-01-10 13:44:56.073186 | orchestrator | + protected = (known after apply)
2026-01-10 13:44:56.073190 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.073197 | orchestrator | + schema = (known after apply)
2026-01-10 13:44:56.073201 | orchestrator | + size_bytes = (known after apply)
2026-01-10 13:44:56.073205 | orchestrator | + tags = (known after apply)
2026-01-10 13:44:56.073209 | orchestrator | + updated_at = (known after apply)
2026-01-10 13:44:56.073212 | orchestrator | }
2026-01-10 13:44:56.073217 | orchestrator |
2026-01-10 13:44:56.073221 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-10 13:44:56.073224 | orchestrator | # (config refers to values not yet known)
2026-01-10 13:44:56.073228 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-10 13:44:56.073232 | orchestrator | + checksum = (known after apply)
2026-01-10 13:44:56.073236 | orchestrator | + created_at = (known after apply)
2026-01-10 13:44:56.073240 | orchestrator | + file = (known after apply)
2026-01-10 13:44:56.073243 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073247 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.073251 | orchestrator | + min_disk_gb = (known after apply)
2026-01-10 13:44:56.073254 | orchestrator | + min_ram_mb = (known after apply)
2026-01-10 13:44:56.073258 | orchestrator | + most_recent = true
2026-01-10 13:44:56.073262 | orchestrator | + name = (known after apply)
2026-01-10 13:44:56.073266 | orchestrator | + protected = (known after apply)
2026-01-10 13:44:56.073269 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.073273 | orchestrator | + schema = (known after apply)
2026-01-10 13:44:56.073277 | orchestrator | + size_bytes = (known after apply)
2026-01-10 13:44:56.073281 | orchestrator | + tags = (known after apply)
2026-01-10 13:44:56.073284 | orchestrator | + updated_at = (known after apply)
2026-01-10 13:44:56.073315 | orchestrator | }
2026-01-10 13:44:56.073321 | orchestrator |
2026-01-10 13:44:56.073325 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-10 13:44:56.073329 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-10 13:44:56.073333 | orchestrator | + content = (known after apply)
2026-01-10 13:44:56.073338 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:56.073341 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:56.073345 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:56.073349 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:56.073353 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:56.073357 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:56.073360 | orchestrator | + directory_permission = "0777"
2026-01-10 13:44:56.073364 | orchestrator | + file_permission = "0644"
2026-01-10 13:44:56.073368 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-10 13:44:56.073372 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073376 | orchestrator | }
2026-01-10 13:44:56.073380 | orchestrator |
2026-01-10 13:44:56.073383 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-10 13:44:56.073387 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-10 13:44:56.073391 | orchestrator | + content = (known after apply)
2026-01-10 13:44:56.073395 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:56.073399 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:56.073402 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:56.073406 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:56.073410 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:56.073421 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:56.073425 | orchestrator | + directory_permission = "0777"
2026-01-10 13:44:56.073429 | orchestrator | + file_permission = "0644"
2026-01-10 13:44:56.073437 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-10 13:44:56.073441 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073445 | orchestrator | }
2026-01-10 13:44:56.073449 | orchestrator |
2026-01-10 13:44:56.073453 | orchestrator | # local_file.inventory will be created
2026-01-10 13:44:56.073456 | orchestrator | + resource "local_file" "inventory" {
2026-01-10 13:44:56.073460 | orchestrator | + content = (known after apply)
2026-01-10 13:44:56.073464 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:56.073468 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:56.073471 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:56.073475 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:56.073479 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:56.073483 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:56.073487 | orchestrator | + directory_permission = "0777"
2026-01-10 13:44:56.073491 | orchestrator | + file_permission = "0644"
2026-01-10 13:44:56.073494 | orchestrator | + filename = "inventory.ci"
2026-01-10 13:44:56.073498 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073502 | orchestrator | }
2026-01-10 13:44:56.073508 | orchestrator |
2026-01-10 13:44:56.073512 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-10 13:44:56.073516 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-10 13:44:56.073520 | orchestrator | + content = (sensitive value)
2026-01-10 13:44:56.073524 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:56.073527 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:56.073531 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:56.073535 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:56.073539 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:56.073542 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:56.073546 | orchestrator | + directory_permission = "0700"
2026-01-10 13:44:56.073550 | orchestrator | + file_permission = "0600"
2026-01-10 13:44:56.073554 | orchestrator | + filename = ".id_rsa.ci"
2026-01-10 13:44:56.073558 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073562 | orchestrator | }
2026-01-10 13:44:56.073565 | orchestrator |
2026-01-10 13:44:56.073569 | orchestrator | # null_resource.node_semaphore will be created
2026-01-10 13:44:56.073573 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-10 13:44:56.073577 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073581 | orchestrator | }
2026-01-10 13:44:56.073584 | orchestrator |
2026-01-10 13:44:56.073588 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-10 13:44:56.073592 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-10 13:44:56.073596 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.073600 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.073603 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073607 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:56.073611 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.073615 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-10 13:44:56.073618 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.073622 | orchestrator | + size = 80
2026-01-10 13:44:56.073626 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.073630 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.073634 | orchestrator | }
2026-01-10 13:44:56.073637 | orchestrator |
2026-01-10 13:44:56.073641 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-10 13:44:56.073645 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:56.073649 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.073652 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.073656 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073664 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:56.073667 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.073671 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-10 13:44:56.073675 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.073679 | orchestrator | + size = 80
2026-01-10 13:44:56.073683 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.073686 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.073690 | orchestrator | }
2026-01-10 13:44:56.073696 | orchestrator |
2026-01-10 13:44:56.073700 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-10 13:44:56.073704 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:56.073708 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.073711 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.073715 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073719 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:56.073723 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.073726 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-10 13:44:56.073730 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.073734 | orchestrator | + size = 80
2026-01-10 13:44:56.073738 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.073741 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.073745 | orchestrator | }
2026-01-10 13:44:56.073749 | orchestrator |
2026-01-10 13:44:56.073753 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-10 13:44:56.073756 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:56.073761 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.073765 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.073768 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073772 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:56.073776 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.073780 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-10 13:44:56.073783 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.073787 | orchestrator | + size = 80
2026-01-10 13:44:56.073794 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.073798 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.073801 | orchestrator | }
2026-01-10 13:44:56.073805 | orchestrator |
2026-01-10 13:44:56.073809 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-10 13:44:56.073813 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:56.073817 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.073820 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.073824 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073828 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:56.073832 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.073835 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-10 13:44:56.073839 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.073843 | orchestrator | + size = 80
2026-01-10 13:44:56.073847 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.073850 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.073854 | orchestrator | }
2026-01-10 13:44:56.073858 | orchestrator |
2026-01-10 13:44:56.073861 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-10 13:44:56.073865 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:56.073869 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.073873 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.073876 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073885 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:56.073888 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.073892 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-10 13:44:56.073896 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.073900 | orchestrator | + size = 80
2026-01-10 13:44:56.073904 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.073907 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.073911 | orchestrator | }
2026-01-10 13:44:56.073917 | orchestrator |
2026-01-10 13:44:56.073921 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-10 13:44:56.073924 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:56.073928 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.073932 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.073936 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073939 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:56.073943 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.073947 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-10 13:44:56.073951 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.073954 | orchestrator | + size = 80
2026-01-10 13:44:56.073958 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.073962 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.073966 | orchestrator | }
2026-01-10 13:44:56.073969 | orchestrator |
2026-01-10 13:44:56.073973 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-10 13:44:56.073977 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:56.073981 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.073985 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.073988 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.073992 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.073996 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-10 13:44:56.074000 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.074003 | orchestrator | + size = 20
2026-01-10 13:44:56.074007 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.074011 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.074032 | orchestrator | }
2026-01-10 13:44:56.074036 | orchestrator |
2026-01-10 13:44:56.074040 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-10 13:44:56.074043 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:56.074047 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.074051 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.074055 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.074059 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.074062 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-10 13:44:56.074066 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.074070 | orchestrator | + size = 20
2026-01-10 13:44:56.074074 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.074077 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.074081 | orchestrator | }
2026-01-10 13:44:56.074085 | orchestrator |
2026-01-10 13:44:56.074089 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-10 13:44:56.074093 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:56.074096 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.074100 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.074104 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.074108 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.074112 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-10 13:44:56.074115 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.074123 | orchestrator | + size = 20
2026-01-10 13:44:56.074127 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.074131 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.074135 | orchestrator | }
2026-01-10 13:44:56.074141 | orchestrator |
2026-01-10 13:44:56.074145 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-10 13:44:56.074149 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:56.074153 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.074156 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.074160 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.074167 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.074171 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-10 13:44:56.074175 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.074178 | orchestrator | + size = 20
2026-01-10 13:44:56.074182 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.074186 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.074190 | orchestrator | }
2026-01-10 13:44:56.074194 | orchestrator |
2026-01-10 13:44:56.074197 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-10 13:44:56.074201 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:56.074205 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.074209 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.074213 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.074216 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.074220 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-10 13:44:56.074224 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.074228 | orchestrator | + size = 20
2026-01-10 13:44:56.074232 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.074236 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.074239 | orchestrator | }
2026-01-10 13:44:56.074243 | orchestrator |
2026-01-10 13:44:56.074247 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-10 13:44:56.074251 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:56.074254 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.074258 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.074262 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.074266 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.074270 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-10 13:44:56.074273 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.074277 | orchestrator | + size = 20
2026-01-10 13:44:56.074281 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.074285 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.074300 | orchestrator | }
2026-01-10 13:44:56.074304 | orchestrator |
2026-01-10 13:44:56.074308 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-10 13:44:56.074312 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:56.074316 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.074319 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.074323 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.074327 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.074331 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-10 13:44:56.074335 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.074338 | orchestrator | + size = 20
2026-01-10 13:44:56.074342 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:56.074346 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:56.074350 | orchestrator | }
2026-01-10 13:44:56.074354 | orchestrator |
2026-01-10 13:44:56.074368 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-10 13:44:56.074373 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:56.074380 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:56.074384 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:56.074388 | orchestrator | + id = (known after apply)
2026-01-10 13:44:56.074392 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:56.074395 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-10 13:44:56.074399 | orchestrator | + region = (known after apply)
2026-01-10 13:44:56.074403 | orchestrator | + size = 20 2026-01-10 13:44:56.074407 | orchestrator | + volume_retype_policy = "never" 2026-01-10 13:44:56.074410 | orchestrator | + volume_type = "ssd" 2026-01-10 13:44:56.074414 | orchestrator | } 2026-01-10 13:44:56.074421 | orchestrator | 2026-01-10 13:44:56.074424 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-10 13:44:56.074428 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-10 13:44:56.074432 | orchestrator | + attachment = (known after apply) 2026-01-10 13:44:56.074436 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:56.074439 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.074443 | orchestrator | + metadata = (known after apply) 2026-01-10 13:44:56.074447 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-10 13:44:56.074451 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.074455 | orchestrator | + size = 20 2026-01-10 13:44:56.074458 | orchestrator | + volume_retype_policy = "never" 2026-01-10 13:44:56.074462 | orchestrator | + volume_type = "ssd" 2026-01-10 13:44:56.074466 | orchestrator | } 2026-01-10 13:44:56.074470 | orchestrator | 2026-01-10 13:44:56.074473 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-10 13:44:56.074477 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-10 13:44:56.074481 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:56.074485 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:56.074489 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:56.074492 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:56.074496 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:56.074500 | orchestrator | + config_drive = true 2026-01-10 13:44:56.074513 | orchestrator | + created = (known after apply) 
2026-01-10 13:44:56.074517 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:56.074520 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-10 13:44:56.074524 | orchestrator | + force_delete = false 2026-01-10 13:44:56.074528 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:56.074532 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.074535 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:56.074539 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:56.074543 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:56.074547 | orchestrator | + name = "testbed-manager" 2026-01-10 13:44:56.074550 | orchestrator | + power_state = "active" 2026-01-10 13:44:56.074554 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.074558 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:56.074561 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:56.074565 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:56.074569 | orchestrator | + user_data = (sensitive value) 2026-01-10 13:44:56.074573 | orchestrator | 2026-01-10 13:44:56.074577 | orchestrator | + block_device { 2026-01-10 13:44:56.074580 | orchestrator | + boot_index = 0 2026-01-10 13:44:56.074584 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:56.074588 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:56.074592 | orchestrator | + multiattach = false 2026-01-10 13:44:56.074595 | orchestrator | + source_type = "volume" 2026-01-10 13:44:56.074599 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.074606 | orchestrator | } 2026-01-10 13:44:56.074610 | orchestrator | 2026-01-10 13:44:56.074614 | orchestrator | + network { 2026-01-10 13:44:56.074618 | orchestrator | + access_network = false 2026-01-10 13:44:56.074622 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:56.074626 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-01-10 13:44:56.074629 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:56.074633 | orchestrator | + name = (known after apply) 2026-01-10 13:44:56.074637 | orchestrator | + port = (known after apply) 2026-01-10 13:44:56.074641 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.074644 | orchestrator | } 2026-01-10 13:44:56.074648 | orchestrator | } 2026-01-10 13:44:56.074654 | orchestrator | 2026-01-10 13:44:56.074658 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-10 13:44:56.074662 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:56.074666 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:56.074669 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:56.074673 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:56.074677 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:56.074681 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:56.074684 | orchestrator | + config_drive = true 2026-01-10 13:44:56.074688 | orchestrator | + created = (known after apply) 2026-01-10 13:44:56.074692 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:56.074696 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:56.074699 | orchestrator | + force_delete = false 2026-01-10 13:44:56.074703 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:56.074707 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.074711 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:56.074715 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:56.074718 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:56.074722 | orchestrator | + name = "testbed-node-0" 2026-01-10 13:44:56.074726 | orchestrator | + power_state = "active" 2026-01-10 13:44:56.074729 | orchestrator | + region 
= (known after apply) 2026-01-10 13:44:56.074733 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:56.074737 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:56.074741 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:56.074744 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:56.074748 | orchestrator | 2026-01-10 13:44:56.074752 | orchestrator | + block_device { 2026-01-10 13:44:56.074756 | orchestrator | + boot_index = 0 2026-01-10 13:44:56.074759 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:56.074763 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:56.074767 | orchestrator | + multiattach = false 2026-01-10 13:44:56.074771 | orchestrator | + source_type = "volume" 2026-01-10 13:44:56.074774 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.074778 | orchestrator | } 2026-01-10 13:44:56.074782 | orchestrator | 2026-01-10 13:44:56.074786 | orchestrator | + network { 2026-01-10 13:44:56.074790 | orchestrator | + access_network = false 2026-01-10 13:44:56.074793 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:56.074797 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:56.074801 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:56.074804 | orchestrator | + name = (known after apply) 2026-01-10 13:44:56.074808 | orchestrator | + port = (known after apply) 2026-01-10 13:44:56.074812 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.074816 | orchestrator | } 2026-01-10 13:44:56.074819 | orchestrator | } 2026-01-10 13:44:56.074825 | orchestrator | 2026-01-10 13:44:56.074829 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-10 13:44:56.074833 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:56.074837 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 
13:44:56.074844 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:56.074848 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:56.074852 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:56.074856 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:56.074860 | orchestrator | + config_drive = true 2026-01-10 13:44:56.074863 | orchestrator | + created = (known after apply) 2026-01-10 13:44:56.074867 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:56.074871 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:56.074875 | orchestrator | + force_delete = false 2026-01-10 13:44:56.074878 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:56.074882 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.074886 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:56.074890 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:56.074893 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:56.074897 | orchestrator | + name = "testbed-node-1" 2026-01-10 13:44:56.074901 | orchestrator | + power_state = "active" 2026-01-10 13:44:56.074904 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.074908 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:56.074912 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:56.074916 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:56.074922 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:56.074926 | orchestrator | 2026-01-10 13:44:56.074930 | orchestrator | + block_device { 2026-01-10 13:44:56.074934 | orchestrator | + boot_index = 0 2026-01-10 13:44:56.074937 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:56.074941 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:56.074945 | orchestrator | + multiattach = false 2026-01-10 
13:44:56.074949 | orchestrator | + source_type = "volume" 2026-01-10 13:44:56.074952 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.074956 | orchestrator | } 2026-01-10 13:44:56.074960 | orchestrator | 2026-01-10 13:44:56.074964 | orchestrator | + network { 2026-01-10 13:44:56.074967 | orchestrator | + access_network = false 2026-01-10 13:44:56.074971 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:56.074975 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:56.074979 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:56.074982 | orchestrator | + name = (known after apply) 2026-01-10 13:44:56.074986 | orchestrator | + port = (known after apply) 2026-01-10 13:44:56.074990 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.074994 | orchestrator | } 2026-01-10 13:44:56.074997 | orchestrator | } 2026-01-10 13:44:56.075003 | orchestrator | 2026-01-10 13:44:56.075007 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-10 13:44:56.075011 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:56.075015 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:56.075018 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:56.075022 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:56.075026 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:56.075030 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:56.075034 | orchestrator | + config_drive = true 2026-01-10 13:44:56.075037 | orchestrator | + created = (known after apply) 2026-01-10 13:44:56.075041 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:56.075045 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:56.075049 | orchestrator | + force_delete = false 2026-01-10 13:44:56.075053 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 
13:44:56.075056 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.075060 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:56.075068 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:56.075072 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:56.075076 | orchestrator | + name = "testbed-node-2" 2026-01-10 13:44:56.075079 | orchestrator | + power_state = "active" 2026-01-10 13:44:56.075083 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.075087 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:56.075090 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:56.075094 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:56.075098 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:56.075102 | orchestrator | 2026-01-10 13:44:56.075106 | orchestrator | + block_device { 2026-01-10 13:44:56.075109 | orchestrator | + boot_index = 0 2026-01-10 13:44:56.075113 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:56.075117 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:56.075121 | orchestrator | + multiattach = false 2026-01-10 13:44:56.075124 | orchestrator | + source_type = "volume" 2026-01-10 13:44:56.075128 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.075132 | orchestrator | } 2026-01-10 13:44:56.075136 | orchestrator | 2026-01-10 13:44:56.075139 | orchestrator | + network { 2026-01-10 13:44:56.075144 | orchestrator | + access_network = false 2026-01-10 13:44:56.075147 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:56.075151 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:56.075155 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:56.075159 | orchestrator | + name = (known after apply) 2026-01-10 13:44:56.075162 | orchestrator | + port = (known after apply) 2026-01-10 13:44:56.075166 | orchestrator | + uuid 
= (known after apply) 2026-01-10 13:44:56.075170 | orchestrator | } 2026-01-10 13:44:56.075174 | orchestrator | } 2026-01-10 13:44:56.075179 | orchestrator | 2026-01-10 13:44:56.075186 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-10 13:44:56.075190 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:56.075194 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:56.075198 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:56.075202 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:56.075205 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:56.075209 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:56.075213 | orchestrator | + config_drive = true 2026-01-10 13:44:56.075217 | orchestrator | + created = (known after apply) 2026-01-10 13:44:56.075220 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:56.075224 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:56.075228 | orchestrator | + force_delete = false 2026-01-10 13:44:56.075231 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:56.075235 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.075239 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:56.075243 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:56.075246 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:56.075250 | orchestrator | + name = "testbed-node-3" 2026-01-10 13:44:56.075254 | orchestrator | + power_state = "active" 2026-01-10 13:44:56.075258 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.075261 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:56.075265 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:56.075269 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:56.075273 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:56.075276 | orchestrator | 2026-01-10 13:44:56.075280 | orchestrator | + block_device { 2026-01-10 13:44:56.075284 | orchestrator | + boot_index = 0 2026-01-10 13:44:56.075298 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:56.075302 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:56.075310 | orchestrator | + multiattach = false 2026-01-10 13:44:56.075314 | orchestrator | + source_type = "volume" 2026-01-10 13:44:56.075317 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.075321 | orchestrator | } 2026-01-10 13:44:56.075325 | orchestrator | 2026-01-10 13:44:56.075329 | orchestrator | + network { 2026-01-10 13:44:56.075332 | orchestrator | + access_network = false 2026-01-10 13:44:56.075336 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:56.075340 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:56.075344 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:56.075347 | orchestrator | + name = (known after apply) 2026-01-10 13:44:56.075351 | orchestrator | + port = (known after apply) 2026-01-10 13:44:56.075355 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.075359 | orchestrator | } 2026-01-10 13:44:56.075363 | orchestrator | } 2026-01-10 13:44:56.075368 | orchestrator | 2026-01-10 13:44:56.075372 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-10 13:44:56.075376 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:56.075380 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:56.075384 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:56.075388 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:56.075391 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:56.075395 | orchestrator | + availability_zone = "nova" 2026-01-10 
13:44:56.075399 | orchestrator | + config_drive = true 2026-01-10 13:44:56.075403 | orchestrator | + created = (known after apply) 2026-01-10 13:44:56.075406 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:56.075410 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:56.075414 | orchestrator | + force_delete = false 2026-01-10 13:44:56.075418 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:56.075421 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.075425 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:56.075429 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:56.075433 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:56.075436 | orchestrator | + name = "testbed-node-4" 2026-01-10 13:44:56.075440 | orchestrator | + power_state = "active" 2026-01-10 13:44:56.075444 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.075447 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:56.075451 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:56.075455 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:56.075459 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:56.075463 | orchestrator | 2026-01-10 13:44:56.075466 | orchestrator | + block_device { 2026-01-10 13:44:56.075470 | orchestrator | + boot_index = 0 2026-01-10 13:44:56.075474 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:56.075478 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:56.075481 | orchestrator | + multiattach = false 2026-01-10 13:44:56.075485 | orchestrator | + source_type = "volume" 2026-01-10 13:44:56.075489 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.075493 | orchestrator | } 2026-01-10 13:44:56.075497 | orchestrator | 2026-01-10 13:44:56.075500 | orchestrator | + network { 2026-01-10 13:44:56.075504 | orchestrator | + 
access_network = false 2026-01-10 13:44:56.075508 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:56.075512 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:56.075515 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:56.075519 | orchestrator | + name = (known after apply) 2026-01-10 13:44:56.075523 | orchestrator | + port = (known after apply) 2026-01-10 13:44:56.075527 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.075530 | orchestrator | } 2026-01-10 13:44:56.075534 | orchestrator | } 2026-01-10 13:44:56.075543 | orchestrator | 2026-01-10 13:44:56.075547 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-10 13:44:56.075551 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:56.075555 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:56.075559 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:56.075562 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:56.075566 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:56.075570 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:56.075574 | orchestrator | + config_drive = true 2026-01-10 13:44:56.075577 | orchestrator | + created = (known after apply) 2026-01-10 13:44:56.075581 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:56.075585 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:56.075589 | orchestrator | + force_delete = false 2026-01-10 13:44:56.075592 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:56.075596 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.075600 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:56.075604 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:56.075607 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:56.075611 | orchestrator | 
+ name = "testbed-node-5" 2026-01-10 13:44:56.075615 | orchestrator | + power_state = "active" 2026-01-10 13:44:56.075619 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.075622 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:56.075626 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:56.075630 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:56.075634 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:56.075637 | orchestrator | 2026-01-10 13:44:56.075641 | orchestrator | + block_device { 2026-01-10 13:44:56.075645 | orchestrator | + boot_index = 0 2026-01-10 13:44:56.075649 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:56.075652 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:56.075656 | orchestrator | + multiattach = false 2026-01-10 13:44:56.075660 | orchestrator | + source_type = "volume" 2026-01-10 13:44:56.075664 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.075667 | orchestrator | } 2026-01-10 13:44:56.075671 | orchestrator | 2026-01-10 13:44:56.075675 | orchestrator | + network { 2026-01-10 13:44:56.075679 | orchestrator | + access_network = false 2026-01-10 13:44:56.075682 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:56.075686 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:56.075690 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:56.075694 | orchestrator | + name = (known after apply) 2026-01-10 13:44:56.075698 | orchestrator | + port = (known after apply) 2026-01-10 13:44:56.075701 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:56.075705 | orchestrator | } 2026-01-10 13:44:56.075709 | orchestrator | } 2026-01-10 13:44:56.075713 | orchestrator | 2026-01-10 13:44:56.075717 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-10 13:44:56.075720 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-01-10 13:44:56.075724 | orchestrator | + fingerprint = (known after apply) 2026-01-10 13:44:56.075728 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.075732 | orchestrator | + name = "testbed" 2026-01-10 13:44:56.075736 | orchestrator | + private_key = (sensitive value) 2026-01-10 13:44:56.075739 | orchestrator | + public_key = (known after apply) 2026-01-10 13:44:56.075743 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.075747 | orchestrator | + user_id = (known after apply) 2026-01-10 13:44:56.075751 | orchestrator | } 2026-01-10 13:44:56.075757 | orchestrator | 2026-01-10 13:44:56.075761 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-10 13:44:56.075764 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-10 13:44:56.075772 | orchestrator | + device = (known after apply) 2026-01-10 13:44:56.075775 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.075779 | orchestrator | + instance_id = (known after apply) 2026-01-10 13:44:56.075783 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.075790 | orchestrator | + volume_id = (known after apply) 2026-01-10 13:44:56.075794 | orchestrator | } 2026-01-10 13:44:56.075798 | orchestrator | 2026-01-10 13:44:56.075801 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-10 13:44:56.075805 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-10 13:44:56.075809 | orchestrator | + device = (known after apply) 2026-01-10 13:44:56.075813 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.075817 | orchestrator | + instance_id = (known after apply) 2026-01-10 13:44:56.075820 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.075824 | orchestrator | + volume_id = (known after apply) 2026-01-10 
13:44:56.075828 | orchestrator | } 2026-01-10 13:44:56.075832 | orchestrator | 2026-01-10 13:44:56.075835 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-10 13:44:56.075839 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-10 13:44:56.075843 | orchestrator | + device = (known after apply) 2026-01-10 13:44:56.075847 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.075850 | orchestrator | + instance_id = (known after apply) 2026-01-10 13:44:56.075854 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.075858 | orchestrator | + volume_id = (known after apply) 2026-01-10 13:44:56.075862 | orchestrator | } 2026-01-10 13:44:56.075865 | orchestrator | 2026-01-10 13:44:56.075869 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-01-10 13:44:56.075873 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-10 13:44:56.075877 | orchestrator | + device = (known after apply) 2026-01-10 13:44:56.075881 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.075885 | orchestrator | + instance_id = (known after apply) 2026-01-10 13:44:56.075888 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.075892 | orchestrator | + volume_id = (known after apply) 2026-01-10 13:44:56.075896 | orchestrator | } 2026-01-10 13:44:56.075900 | orchestrator | 2026-01-10 13:44:56.075904 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-01-10 13:44:56.075907 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-10 13:44:56.075911 | orchestrator | + device = (known after apply) 2026-01-10 13:44:56.075915 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.075919 | orchestrator | + instance_id = (known after apply) 2026-01-10 13:44:56.075923 | 
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-01-10 13:44:56.084382 | orchestrator | + ip_version = 4 2026-01-10 13:44:56.084386 | orchestrator | + ipv6_address_mode = (known after apply) 2026-01-10 13:44:56.084390 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-01-10 13:44:56.084394 | orchestrator | + name = "subnet-testbed-management" 2026-01-10 13:44:56.084398 | orchestrator | + network_id = (known after apply) 2026-01-10 13:44:56.084402 | orchestrator | + no_gateway = false 2026-01-10 13:44:56.084406 | orchestrator | + region = (known after apply) 2026-01-10 13:44:56.084410 | orchestrator | + service_types = (known after apply) 2026-01-10 13:44:56.084424 | orchestrator | + tenant_id = (known after apply) 2026-01-10 13:44:56.084428 | orchestrator | 2026-01-10 13:44:56.084432 | orchestrator | + allocation_pool { 2026-01-10 13:44:56.084447 | orchestrator | + end = "192.168.31.250" 2026-01-10 13:44:56.084451 | orchestrator | + start = "192.168.31.200" 2026-01-10 13:44:56.084455 | orchestrator | } 2026-01-10 13:44:56.084459 | orchestrator | } 2026-01-10 13:44:56.084463 | orchestrator | 2026-01-10 13:44:56.084467 | orchestrator | # terraform_data.image will be created 2026-01-10 13:44:56.084470 | orchestrator | + resource "terraform_data" "image" { 2026-01-10 13:44:56.084474 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.084478 | orchestrator | + input = "Ubuntu 24.04" 2026-01-10 13:44:56.084482 | orchestrator | + output = (known after apply) 2026-01-10 13:44:56.084485 | orchestrator | } 2026-01-10 13:44:56.084489 | orchestrator | 2026-01-10 13:44:56.084493 | orchestrator | # terraform_data.image_node will be created 2026-01-10 13:44:56.084497 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-10 13:44:56.084501 | orchestrator | + id = (known after apply) 2026-01-10 13:44:56.084505 | orchestrator | + input = "Ubuntu 24.04" 2026-01-10 13:44:56.084508 | orchestrator | + output = (known after apply) 2026-01-10 13:44:56.084512 | orchestrator | } 2026-01-10 
13:44:56.084516 | orchestrator | 2026-01-10 13:44:56.084520 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 2026-01-10 13:44:56.084523 | orchestrator | 2026-01-10 13:44:56.084527 | orchestrator | Changes to Outputs: 2026-01-10 13:44:56.084531 | orchestrator | + manager_address = (sensitive value) 2026-01-10 13:44:56.084535 | orchestrator | + private_key = (sensitive value) 2026-01-10 13:44:56.335754 | orchestrator | terraform_data.image_node: Creating... 2026-01-10 13:44:56.335924 | orchestrator | terraform_data.image: Creating... 2026-01-10 13:44:56.336361 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=6ffbffe3-3f4a-0d1f-dd43-7b3d1c9840a3] 2026-01-10 13:44:56.337056 | orchestrator | terraform_data.image: Creation complete after 0s [id=543b27b3-9a28-55dc-9dbf-ad34d1fee8f6] 2026-01-10 13:44:56.364312 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-10 13:44:56.383870 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-01-10 13:44:56.388100 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-10 13:44:56.388446 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-10 13:44:56.399594 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-01-10 13:44:56.402395 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-01-10 13:44:56.403335 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-01-10 13:44:56.404205 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-01-10 13:44:56.405074 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-10 13:44:56.431348 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 
2026-01-10 13:44:56.845376 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-10 13:44:56.849491 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-10 13:44:56.857120 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-10 13:44:56.859615 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-10 13:44:56.888851 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-01-10 13:44:56.894495 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-10 13:44:57.404838 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=81faf83c-cd8d-4e23-be3b-b569d1234bfb]
2026-01-10 13:44:57.407948 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-10 13:45:00.023517 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=e023e992-ae40-4cae-8e0e-c078bcc164d6]
2026-01-10 13:45:00.031530 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-10 13:45:00.039718 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=4c46785e-60ba-460b-8af0-69ed9944293e]
2026-01-10 13:45:00.048422 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=f60c9e3f-4fb9-4762-8319-6decaa6c25a2]
2026-01-10 13:45:00.054345 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-10 13:45:00.072075 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-10 13:45:00.072426 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=fb1cd23c-1eba-48f8-b0af-e37f12bddfbe]
2026-01-10 13:45:00.073306 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=56640cac-7dbd-450f-ace0-5456f0f7a79c]
2026-01-10 13:45:00.079813 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-10 13:45:00.080077 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-10 13:45:00.090149 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=6601bfae-4805-46bf-9ab8-35c841e000dc]
2026-01-10 13:45:00.095442 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-10 13:45:00.131660 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=2ce7cca4-0817-4dba-a1e7-697e67028341]
2026-01-10 13:45:00.139670 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-10 13:45:00.144403 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=644eb2b6-5717-40d5-adcd-cd376a39a92a]
2026-01-10 13:45:00.150537 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-10 13:45:00.157878 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=80389416-edd4-4aaf-b80d-5b05821e7076]
2026-01-10 13:45:00.163937 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-10 13:45:00.221266 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=71297883db0414c8697064c4b9793fa21d295d7a]
2026-01-10 13:45:00.222220 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=9d6b7b023193156eba32d7f6848d5ec0ede100fb]
2026-01-10 13:45:00.746459 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=7f640e79-0930-428c-9fe4-b318fed0d151]
2026-01-10 13:45:01.141333 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=c85120d4-6bde-42e5-8d3e-c22d9155d28a]
2026-01-10 13:45:01.144283 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-10 13:45:03.452207 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=8c985bfc-a5bb-40d1-ad90-a588790d178e]
2026-01-10 13:45:03.482557 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=ce9993ed-1047-42a3-ac7e-aedc9bfe346e]
2026-01-10 13:45:03.495649 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=20f34273-2e89-4d41-972e-9d1b835af58f]
2026-01-10 13:45:03.519061 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4]
2026-01-10 13:45:03.547245 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=8fa62895-cbfb-4207-9a20-878bfa0ed6d1]
2026-01-10 13:45:03.559771 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=9218c5d8-5f0e-4ef3-b14f-4b2502394196]
2026-01-10 13:45:04.304264 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=25d45fa2-4024-408f-ba90-858da1ea509f]
2026-01-10 13:45:04.310103 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-10 13:45:04.313189 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-10 13:45:04.313572 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-10 13:45:04.558102 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=7ad0a595-537e-4fd9-b0ac-e5e759ad4c31]
2026-01-10 13:45:04.576278 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-10 13:45:04.577223 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-10 13:45:04.580043 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-10 13:45:04.580779 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-10 13:45:04.580954 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-10 13:45:04.583560 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-10 13:45:04.610336 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=35473005-67f8-4250-9b39-ef39e06e61bd]
2026-01-10 13:45:04.616497 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-10 13:45:04.620368 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-10 13:45:04.620631 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-10 13:45:04.852915 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=b47aa27d-81d3-49c0-8b27-932c4f9502c6]
2026-01-10 13:45:04.864695 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-10 13:45:04.970057 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=44b86a7b-b29f-433e-8ef2-61716f78a8c0]
2026-01-10 13:45:04.978945 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-10 13:45:05.026049 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=46b43e49-451a-492c-9ade-5750f9d21eac]
2026-01-10 13:45:05.035898 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-10 13:45:05.178881 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=6365f743-cc33-4002-8898-5e06247ef4e2]
2026-01-10 13:45:05.189816 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-10 13:45:05.262084 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=c3f16177-f18f-43d9-a589-89daf39a2c46]
2026-01-10 13:45:05.271265 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-10 13:45:05.334876 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=8620c5b2-68e9-4194-8974-4f3f42e8e5b9]
2026-01-10 13:45:05.341396 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-10 13:45:05.357996 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=c1bebf96-dbef-4b1a-826e-cbbed58f3638]
2026-01-10 13:45:05.362526 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-10 13:45:05.527602 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=e5e3d170-0cec-4aaf-8a6f-f2dc04428fa8]
2026-01-10 13:45:05.573824 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=19e24d79-810c-4539-a3ac-a0e1bf857848]
2026-01-10 13:45:05.629563 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=9f0d93f3-7356-4748-a507-9802b73b3be2]
2026-01-10 13:45:05.699350 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=19e4fa68-284e-4fe0-844c-b3fe49195012]
2026-01-10 13:45:05.759838 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=b67f3d57-54a9-4858-a475-f661c8a5214d]
2026-01-10 13:45:05.845082 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=4984e1bb-fc36-4d9e-a573-886e75c5eccf]
2026-01-10 13:45:06.150749 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=74a59b1d-2677-46d7-8908-bab13f025adf]
2026-01-10 13:45:06.189706 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=e29460d0-2023-476b-b93a-e4597982ac0b]
2026-01-10 13:45:06.331385 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=4deeee36-b5bd-4bbc-8bcd-89b4a6cae54b]
2026-01-10 13:45:08.428004 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=61710be9-a7ad-4567-8d36-8b287c6d7f5f]
2026-01-10 13:45:08.446879 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-10 13:45:08.455431 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-10 13:45:08.467181 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-10 13:45:08.467988 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-10 13:45:08.472204 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-10 13:45:08.472572 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-10 13:45:08.480708 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-10 13:45:10.673972 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=fc952b8b-44e0-4b58-957e-fa6d7d4d323f]
2026-01-10 13:45:10.690609 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-10 13:45:10.693442 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-10 13:45:10.701242 | orchestrator | local_file.inventory: Creating...
2026-01-10 13:45:10.702113 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=79fcda97a57264b7b956a271f4e1c465cdd2c67b]
2026-01-10 13:45:10.706559 | orchestrator | local_file.inventory: Creation complete after 0s [id=1e9e789ff88c1b98e3aaa5002d8ef26e93fd9c73]
2026-01-10 13:45:11.787945 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=fc952b8b-44e0-4b58-957e-fa6d7d4d323f]
2026-01-10 13:45:18.460419 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-10 13:45:18.473433 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-10 13:45:18.477653 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-10 13:45:18.477707 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-10 13:45:18.478919 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-10 13:45:18.483322 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-10 13:45:28.469119 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-10 13:45:28.474473 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-10 13:45:28.478657 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-10 13:45:28.478732 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-10 13:45:28.480024 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-10 13:45:28.484138 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-10 13:45:38.478414 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-10 13:45:38.478525 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-10 13:45:38.479575 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-10 13:45:38.479600 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-10 13:45:38.480896 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-10 13:45:38.484307 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-10 13:45:39.095031 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=9a3e3b08-d3e0-4377-b2fc-232e5c192fb2]
2026-01-10 13:45:39.192714 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=807d7d4f-01e6-4e8b-b46d-0b31726a5fd7]
2026-01-10 13:45:39.195681 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=3c6b32ea-7448-41dd-9822-b185c208b8d7]
2026-01-10 13:45:39.317405 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=1336fb06-808a-4065-8169-79c6967c9ff6]
2026-01-10 13:45:39.740725 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 32s [id=a6a59f1c-5518-437e-9f8f-3b66b116e5a4]
2026-01-10 13:45:48.485265 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-01-10 13:45:49.437377 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=2d20fc83-ebd6-45bf-b3da-23216c9da71c]
2026-01-10 13:45:49.448015 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-10 13:45:49.457871 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8439499318016909905]
2026-01-10 13:45:49.472608 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-10 13:45:49.472980 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-10 13:45:49.474405 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-10 13:45:49.475944 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-10 13:45:49.480458 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-10 13:45:49.482467 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-10 13:45:49.496599 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-10 13:45:49.497085 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-10 13:45:49.501161 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-10 13:45:49.508605 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-10 13:45:52.884009 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=a6a59f1c-5518-437e-9f8f-3b66b116e5a4/56640cac-7dbd-450f-ace0-5456f0f7a79c]
2026-01-10 13:45:52.886418 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=1336fb06-808a-4065-8169-79c6967c9ff6/e023e992-ae40-4cae-8e0e-c078bcc164d6]
2026-01-10 13:45:52.918357 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=9a3e3b08-d3e0-4377-b2fc-232e5c192fb2/644eb2b6-5717-40d5-adcd-cd376a39a92a]
2026-01-10 13:45:52.950610 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=a6a59f1c-5518-437e-9f8f-3b66b116e5a4/f60c9e3f-4fb9-4762-8319-6decaa6c25a2]
2026-01-10 13:45:53.109192 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=9a3e3b08-d3e0-4377-b2fc-232e5c192fb2/2ce7cca4-0817-4dba-a1e7-697e67028341]
2026-01-10 13:45:53.153185 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=1336fb06-808a-4065-8169-79c6967c9ff6/80389416-edd4-4aaf-b80d-5b05821e7076]
2026-01-10 13:45:59.191346 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=a6a59f1c-5518-437e-9f8f-3b66b116e5a4/4c46785e-60ba-460b-8af0-69ed9944293e]
2026-01-10 13:45:59.226717 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=9a3e3b08-d3e0-4377-b2fc-232e5c192fb2/fb1cd23c-1eba-48f8-b0af-e37f12bddfbe]
2026-01-10 13:45:59.341118 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=1336fb06-808a-4065-8169-79c6967c9ff6/6601bfae-4805-46bf-9ab8-35c841e000dc]
2026-01-10 13:45:59.510315 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-10 13:46:09.519607 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-10 13:46:10.027326 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=bf60ec4b-85de-4bac-b1c3-6195d762a133]
2026-01-10 13:46:10.047250 | orchestrator |
2026-01-10 13:46:10.047360 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-10 13:46:10.047369 | orchestrator |
2026-01-10 13:46:10.047373 | orchestrator | Outputs:
2026-01-10 13:46:10.047378 | orchestrator |
2026-01-10 13:46:10.047382 | orchestrator | manager_address =
2026-01-10 13:46:10.047387 | orchestrator | private_key =
2026-01-10 13:46:10.469934 | orchestrator | ok: Runtime: 0:01:19.687922
2026-01-10 13:46:10.502133 |
2026-01-10 13:46:10.502281 | TASK [Create infrastructure (stable)]
2026-01-10 13:46:11.045051 | orchestrator | skipping: Conditional result was False
2026-01-10 13:46:11.064126 |
2026-01-10 13:46:11.064364 | TASK [Fetch manager address]
2026-01-10 13:46:11.591958 | orchestrator | ok
2026-01-10 13:46:11.601546 |
2026-01-10 13:46:11.601688 | TASK [Set manager_host address]
2026-01-10 13:46:11.675217 | orchestrator | ok
2026-01-10 13:46:11.687627 |
2026-01-10 13:46:11.687783 | LOOP [Update ansible collections]
2026-01-10 13:46:12.806534 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-10 13:46:12.807093 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-10 13:46:12.807167 | orchestrator | Starting galaxy collection install process
2026-01-10 13:46:12.807200 | orchestrator | Process install dependency map
2026-01-10 13:46:12.807229 | orchestrator | Starting collection install process
2026-01-10 13:46:12.807256 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-01-10 13:46:12.807289 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-01-10 13:46:12.807321 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-10 13:46:12.807386 | orchestrator | ok: Item: commons Runtime: 0:00:00.709951
2026-01-10 13:46:13.836058 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-10 13:46:13.836258 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-10 13:46:13.836311 | orchestrator | Starting galaxy collection install process
2026-01-10 13:46:13.836352 | orchestrator | Process install dependency map
2026-01-10 13:46:13.836411 | orchestrator | Starting collection install process
2026-01-10 13:46:13.836452 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-01-10 13:46:13.836489 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-01-10 13:46:13.836522 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-10 13:46:13.836573 | orchestrator | ok: Item: services Runtime: 0:00:00.744490
2026-01-10 13:46:13.859773 |
2026-01-10 13:46:13.860003 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-10 13:46:24.513139 | orchestrator | ok
2026-01-10 13:46:24.524374 |
2026-01-10 13:46:24.524521 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-10 13:47:24.562753 | orchestrator | ok
2026-01-10 13:47:24.570332 |
2026-01-10 13:47:24.570456 | TASK [Fetch manager ssh hostkey]
2026-01-10 13:47:26.159282 | orchestrator | Output suppressed because no_log was given
2026-01-10 13:47:26.176226 |
2026-01-10 13:47:26.176436 | TASK [Get ssh keypair from terraform environment]
2026-01-10 13:47:26.719941 | orchestrator | ok: Runtime: 0:00:00.009609
2026-01-10 13:47:26.735294 |
2026-01-10 13:47:26.735477 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-10 13:47:26.774937 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-10 13:47:26.784696 |
2026-01-10 13:47:26.784854 | TASK [Run manager part 0]
2026-01-10 13:47:27.836718 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-10 13:47:27.901155 | orchestrator |
2026-01-10 13:47:27.901201 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-10 13:47:27.901211 | orchestrator |
2026-01-10 13:47:27.901227 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-10 13:47:29.745224 | orchestrator | ok: [testbed-manager]
2026-01-10 13:47:29.745317 | orchestrator |
2026-01-10 13:47:29.745341 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-10 13:47:29.745351 | orchestrator |
2026-01-10 13:47:29.745360 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 13:47:31.662742 | orchestrator | ok: [testbed-manager]
2026-01-10 13:47:31.662896 | orchestrator |
2026-01-10 13:47:31.662913 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-10 13:47:32.379062 | orchestrator | ok: [testbed-manager]
2026-01-10 13:47:32.379145 | orchestrator |
2026-01-10 13:47:32.379159 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-10 13:47:32.435538 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:32.435608 | orchestrator |
2026-01-10 13:47:32.435620 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-10 13:47:32.471816 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:32.471912 | orchestrator |
2026-01-10 13:47:32.471926 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-10 13:47:32.503635 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:32.503710 | orchestrator |
2026-01-10 13:47:32.503721 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-10 13:47:32.533293 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:32.533356 | orchestrator |
2026-01-10 13:47:32.533364 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-10 13:47:32.564828 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:32.564896 | orchestrator |
2026-01-10 13:47:32.564909 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-10 13:47:32.599022 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:32.599077 | orchestrator |
2026-01-10 13:47:32.599088 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-10 13:47:32.628220 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:32.628312 | orchestrator |
2026-01-10 13:47:32.628321 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-10 13:47:33.318950 | orchestrator | changed: [testbed-manager]
2026-01-10 13:47:33.318991 | orchestrator |
2026-01-10 13:47:33.318998 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-01-10 13:50:12.977883 | orchestrator | changed: [testbed-manager]
2026-01-10 13:50:12.978881 | orchestrator |
2026-01-10 13:50:12.978898 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-10 13:51:30.591760 | orchestrator | changed: [testbed-manager]
2026-01-10 13:51:30.591866 | orchestrator |
2026-01-10 13:51:30.591882 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-10 13:51:49.901784 | orchestrator | changed: [testbed-manager]
2026-01-10 13:51:49.901859 | orchestrator |
2026-01-10 13:51:49.901871 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-10 13:51:58.415833 | orchestrator | changed: [testbed-manager]
2026-01-10 13:51:58.415894 | orchestrator |
2026-01-10 13:51:58.415908 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-10 13:51:58.463803 | orchestrator | ok: [testbed-manager]
2026-01-10 13:51:58.463887 | orchestrator |
2026-01-10 13:51:58.463903 | orchestrator | TASK [Get current user] ********************************************************
2026-01-10 13:51:59.251030 | orchestrator | ok: [testbed-manager]
2026-01-10 13:51:59.251077 | orchestrator |
2026-01-10 13:51:59.251089 | orchestrator | TASK [Create venv directory] ***************************************************
2026-01-10 13:51:59.993309 | orchestrator | changed: [testbed-manager]
2026-01-10 13:51:59.993399 | orchestrator |
2026-01-10 13:51:59.993416 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-01-10 13:52:06.436531 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:06.436637 | orchestrator |
2026-01-10 13:52:06.436675 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-01-10 13:52:11.930165 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:11.930303 | orchestrator |
2026-01-10 13:52:11.930325 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-01-10 13:52:14.677920 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:14.677999 | orchestrator |
2026-01-10 13:52:14.678043 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-01-10 13:52:16.481082 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:16.481197 | orchestrator |
2026-01-10 13:52:16.481240 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-01-10
13:52:17.595594 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-10 13:52:17.595647 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-10 13:52:17.595653 | orchestrator | 2026-01-10 13:52:17.595659 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-10 13:52:17.639682 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-10 13:52:17.639753 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-10 13:52:17.639762 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-10 13:52:17.639768 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-10 13:52:20.933633 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-10 13:52:20.933682 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-10 13:52:20.933686 | orchestrator | 2026-01-10 13:52:20.933692 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-10 13:52:21.492404 | orchestrator | changed: [testbed-manager] 2026-01-10 13:52:21.492509 | orchestrator | 2026-01-10 13:52:21.492526 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-10 13:53:43.346462 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-10 13:53:43.346521 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-10 13:53:43.346532 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-10 13:53:43.346539 | orchestrator | 2026-01-10 13:53:43.346547 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-10 13:53:45.634876 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-01-10 13:53:45.689792 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-10 13:53:45.689848 | orchestrator | 2026-01-10 13:53:45.689861 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-10 13:53:45.689874 | orchestrator | 2026-01-10 13:53:45.689885 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:53:46.996374 | orchestrator | ok: [testbed-manager] 2026-01-10 13:53:46.996434 | orchestrator | 2026-01-10 13:53:46.996450 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-10 13:53:47.050688 | orchestrator | ok: [testbed-manager] 2026-01-10 13:53:47.050798 | orchestrator | 2026-01-10 13:53:47.050812 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-10 13:53:47.124043 | orchestrator | ok: [testbed-manager] 2026-01-10 13:53:47.124102 | orchestrator | 2026-01-10 13:53:47.124116 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-10 13:53:47.875352 | orchestrator | changed: [testbed-manager] 2026-01-10 13:53:47.875452 | orchestrator | 2026-01-10 13:53:47.875471 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-10 13:53:48.603091 | orchestrator | changed: [testbed-manager] 2026-01-10 13:53:48.603182 | orchestrator | 2026-01-10 13:53:48.603199 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-10 13:53:49.971118 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-10 13:53:49.971191 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-10 13:53:49.971199 | orchestrator | 2026-01-10 13:53:49.971215 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-01-10 13:53:51.411193 | orchestrator | changed: [testbed-manager] 2026-01-10 13:53:51.411323 | orchestrator | 2026-01-10 13:53:51.411342 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-10 13:53:53.199089 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-10 13:53:53.199278 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-10 13:53:53.199285 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-10 13:53:53.199291 | orchestrator | 2026-01-10 13:53:53.199297 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-10 13:53:53.256907 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:53:53.256952 | orchestrator | 2026-01-10 13:53:53.256960 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-10 13:53:53.328450 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:53:53.328514 | orchestrator | 2026-01-10 13:53:53.328523 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-10 13:53:53.867130 | orchestrator | changed: [testbed-manager] 2026-01-10 13:53:53.867252 | orchestrator | 2026-01-10 13:53:53.867271 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-10 13:53:53.938914 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:53:53.938962 | orchestrator | 2026-01-10 13:53:53.938968 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-10 13:53:54.793476 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 13:53:54.793572 | orchestrator | changed: [testbed-manager] 2026-01-10 13:53:54.793590 | orchestrator | 2026-01-10 13:53:54.793603 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-10 13:53:54.835186 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:53:54.835301 | orchestrator | 2026-01-10 13:53:54.835330 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-10 13:53:54.876188 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:53:54.876276 | orchestrator | 2026-01-10 13:53:54.876292 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-10 13:53:54.914827 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:53:54.914917 | orchestrator | 2026-01-10 13:53:54.914936 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-10 13:53:54.993200 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:53:54.993260 | orchestrator | 2026-01-10 13:53:54.993266 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-10 13:53:55.690373 | orchestrator | ok: [testbed-manager] 2026-01-10 13:53:55.690409 | orchestrator | 2026-01-10 13:53:55.690415 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-10 13:53:55.690420 | orchestrator | 2026-01-10 13:53:55.690424 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:53:57.096110 | orchestrator | ok: [testbed-manager] 2026-01-10 13:53:57.096253 | orchestrator | 2026-01-10 13:53:57.096272 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-10 13:53:58.038979 | orchestrator | changed: [testbed-manager] 2026-01-10 13:53:58.039015 | orchestrator | 2026-01-10 13:53:58.039021 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 13:53:58.039027 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-10 13:53:58.039031 | orchestrator | 2026-01-10 13:53:58.563735 | orchestrator | ok: Runtime: 0:06:31.039416 2026-01-10 13:53:58.585860 | 2026-01-10 13:53:58.586056 | TASK [Point out that logging in on the manager is now possible] 2026-01-10 13:53:58.634542 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-10 13:53:58.644033 | 2026-01-10 13:53:58.644188 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-10 13:53:58.678006 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from it here. It takes a few minutes for this task to complete. 2026-01-10 13:53:58.688376 | 2026-01-10 13:53:58.688549 | TASK [Run manager part 1 + 2] 2026-01-10 13:53:59.570574 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-10 13:53:59.626958 | orchestrator | 2026-01-10 13:53:59.627010 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-10 13:53:59.627017 | orchestrator | 2026-01-10 13:53:59.627029 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:54:02.500024 | orchestrator | ok: [testbed-manager] 2026-01-10 13:54:02.500107 | orchestrator | 2026-01-10 13:54:02.500148 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-10 13:54:02.536635 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:54:02.536710 | orchestrator | 2026-01-10 13:54:02.536729 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-10 13:54:02.570659 | orchestrator | ok: [testbed-manager] 2026-01-10 13:54:02.570745 | orchestrator | 2026-01-10 13:54:02.570766 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-01-10 13:54:02.599997 | orchestrator | ok: [testbed-manager] 2026-01-10 13:54:02.600074 | orchestrator | 2026-01-10 13:54:02.600092 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-10 13:54:02.662631 | orchestrator | ok: [testbed-manager] 2026-01-10 13:54:02.662725 | orchestrator | 2026-01-10 13:54:02.662745 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-10 13:54:02.725195 | orchestrator | ok: [testbed-manager] 2026-01-10 13:54:02.725286 | orchestrator | 2026-01-10 13:54:02.725305 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-10 13:54:02.771923 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-10 13:54:02.772003 | orchestrator | 2026-01-10 13:54:02.772018 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-10 13:54:03.486459 | orchestrator | ok: [testbed-manager] 2026-01-10 13:54:03.486543 | orchestrator | 2026-01-10 13:54:03.486561 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-10 13:54:03.539096 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:54:03.539227 | orchestrator | 2026-01-10 13:54:03.539245 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-10 13:54:04.926895 | orchestrator | changed: [testbed-manager] 2026-01-10 13:54:04.926994 | orchestrator | 2026-01-10 13:54:04.927015 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-10 13:54:05.480228 | orchestrator | ok: [testbed-manager] 2026-01-10 13:54:05.480280 | orchestrator | 2026-01-10 13:54:05.480287 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-01-10 13:54:06.611307 | orchestrator | changed: [testbed-manager] 2026-01-10 13:54:06.611399 | orchestrator | 2026-01-10 13:54:06.611418 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-10 13:54:20.761417 | orchestrator | changed: [testbed-manager] 2026-01-10 13:54:20.761515 | orchestrator | 2026-01-10 13:54:20.761532 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-10 13:54:21.458631 | orchestrator | ok: [testbed-manager] 2026-01-10 13:54:21.458700 | orchestrator | 2026-01-10 13:54:21.458718 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-10 13:54:21.510477 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:54:21.510519 | orchestrator | 2026-01-10 13:54:21.510524 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-10 13:54:22.411056 | orchestrator | changed: [testbed-manager] 2026-01-10 13:54:22.411123 | orchestrator | 2026-01-10 13:54:22.411138 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-10 13:54:23.363023 | orchestrator | changed: [testbed-manager] 2026-01-10 13:54:23.363071 | orchestrator | 2026-01-10 13:54:23.363080 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-10 13:54:23.931438 | orchestrator | changed: [testbed-manager] 2026-01-10 13:54:23.931523 | orchestrator | 2026-01-10 13:54:23.931540 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-10 13:54:23.971982 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-10 13:54:23.972091 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-01-10 13:54:23.972107 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-10 13:54:23.972119 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-10 13:54:25.896539 | orchestrator | changed: [testbed-manager] 2026-01-10 13:54:25.896587 | orchestrator | 2026-01-10 13:54:25.896596 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-10 13:54:34.866374 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-10 13:54:34.866435 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-10 13:54:34.866443 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-10 13:54:34.866449 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-10 13:54:34.866459 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-10 13:54:34.866463 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-10 13:54:34.866468 | orchestrator | 2026-01-10 13:54:34.866473 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-10 13:54:35.917876 | orchestrator | changed: [testbed-manager] 2026-01-10 13:54:35.918402 | orchestrator | 2026-01-10 13:54:35.918430 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-10 13:54:35.963437 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:54:35.963530 | orchestrator | 2026-01-10 13:54:35.963548 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-10 13:54:38.941982 | orchestrator | changed: [testbed-manager] 2026-01-10 13:54:38.942478 | orchestrator | 2026-01-10 13:54:38.942505 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-10 13:54:38.986711 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:54:38.986809 | 
orchestrator | 2026-01-10 13:54:38.986826 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-10 13:56:14.180617 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:14.180715 | orchestrator | 2026-01-10 13:56:14.180735 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-10 13:56:15.303836 | orchestrator | ok: [testbed-manager] 2026-01-10 13:56:15.303892 | orchestrator | 2026-01-10 13:56:15.303899 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 13:56:15.303906 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-10 13:56:15.303911 | orchestrator | 2026-01-10 13:56:15.822057 | orchestrator | ok: Runtime: 0:02:16.355350 2026-01-10 13:56:15.831513 | 2026-01-10 13:56:15.831703 | TASK [Reboot manager] 2026-01-10 13:56:17.368034 | orchestrator | ok: Runtime: 0:00:00.941978 2026-01-10 13:56:17.384856 | 2026-01-10 13:56:17.385080 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-10 13:56:32.222133 | orchestrator | ok 2026-01-10 13:56:32.233436 | 2026-01-10 13:56:32.233633 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-10 13:57:32.281197 | orchestrator | ok 2026-01-10 13:57:32.291799 | 2026-01-10 13:57:32.291990 | TASK [Deploy manager + bootstrap nodes] 2026-01-10 13:57:34.732242 | orchestrator | 2026-01-10 13:57:34.732460 | orchestrator | # DEPLOY MANAGER 2026-01-10 13:57:34.732483 | orchestrator | 2026-01-10 13:57:34.732497 | orchestrator | + set -e 2026-01-10 13:57:34.732510 | orchestrator | + echo 2026-01-10 13:57:34.732523 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-10 13:57:34.732539 | orchestrator | + echo 2026-01-10 13:57:34.732585 | orchestrator | + cat /opt/manager-vars.sh 2026-01-10 13:57:34.735531 | orchestrator | export NUMBER_OF_NODES=6 2026-01-10 
13:57:34.735554 | orchestrator | 2026-01-10 13:57:34.735567 | orchestrator | export CEPH_VERSION=reef 2026-01-10 13:57:34.735579 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-10 13:57:34.735591 | orchestrator | export MANAGER_VERSION=latest 2026-01-10 13:57:34.735612 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-10 13:57:34.735622 | orchestrator | 2026-01-10 13:57:34.735639 | orchestrator | export ARA=false 2026-01-10 13:57:34.735649 | orchestrator | export DEPLOY_MODE=manager 2026-01-10 13:57:34.735666 | orchestrator | export TEMPEST=false 2026-01-10 13:57:34.735677 | orchestrator | export IS_ZUUL=true 2026-01-10 13:57:34.735686 | orchestrator | 2026-01-10 13:57:34.735703 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-10 13:57:34.735714 | orchestrator | export EXTERNAL_API=false 2026-01-10 13:57:34.735724 | orchestrator | 2026-01-10 13:57:34.735733 | orchestrator | export IMAGE_USER=ubuntu 2026-01-10 13:57:34.735746 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-10 13:57:34.735756 | orchestrator | 2026-01-10 13:57:34.735765 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-10 13:57:34.735781 | orchestrator | 2026-01-10 13:57:34.735791 | orchestrator | + echo 2026-01-10 13:57:34.735803 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-10 13:57:34.736549 | orchestrator | ++ export INTERACTIVE=false 2026-01-10 13:57:34.736567 | orchestrator | ++ INTERACTIVE=false 2026-01-10 13:57:34.736579 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 13:57:34.736591 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-10 13:57:34.736715 | orchestrator | + source /opt/manager-vars.sh 2026-01-10 13:57:34.736730 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-10 13:57:34.736742 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-10 13:57:34.736752 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-10 13:57:34.736766 | orchestrator | ++ CEPH_VERSION=reef 2026-01-10 13:57:34.736789 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-01-10 13:57:34.736807 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-10 13:57:34.736823 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-10 13:57:34.736840 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-10 13:57:34.736855 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-10 13:57:34.736883 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-10 13:57:34.736906 | orchestrator | ++ export ARA=false 2026-01-10 13:57:34.736922 | orchestrator | ++ ARA=false 2026-01-10 13:57:34.736938 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-10 13:57:34.736953 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-10 13:57:34.736968 | orchestrator | ++ export TEMPEST=false 2026-01-10 13:57:34.736985 | orchestrator | ++ TEMPEST=false 2026-01-10 13:57:34.736999 | orchestrator | ++ export IS_ZUUL=true 2026-01-10 13:57:34.737014 | orchestrator | ++ IS_ZUUL=true 2026-01-10 13:57:34.737031 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-10 13:57:34.737046 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-10 13:57:34.737062 | orchestrator | ++ export EXTERNAL_API=false 2026-01-10 13:57:34.737078 | orchestrator | ++ EXTERNAL_API=false 2026-01-10 13:57:34.737093 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-10 13:57:34.737109 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-10 13:57:34.737123 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-10 13:57:34.737133 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-10 13:57:34.737143 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-10 13:57:34.737152 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-10 13:57:34.737162 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-10 13:57:34.792587 | orchestrator | + docker version 2026-01-10 13:57:35.061609 | orchestrator | Client: Docker Engine - Community 2026-01-10 13:57:35.061745 | orchestrator | Version: 27.5.1 
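In the trace above, lines prefixed with `+` and `++` are bash xtrace output (one `+` per nesting level), so each line of `/opt/manager-vars.sh` appears twice when it is sourced: once for the `export` statement and once for the plain assignment. A minimal sketch of this sourcing pattern, using a throwaway temp file as a stand-in for the real `/opt/manager-vars.sh`:

```shell
# Minimal sketch of the vars-file sourcing seen in the trace above.
# VARS is a hypothetical temp file standing in for /opt/manager-vars.sh.
set -e
VARS=$(mktemp)
cat > "$VARS" <<'EOF'
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export OPENSTACK_VERSION=2024.2
EOF

# Running the script under 'bash -x' would echo each sourced line with a
# '+' prefix, which is exactly what the job console shows.
. "$VARS"
echo "deploying ${NUMBER_OF_NODES} nodes with Ceph ${CEPH_VERSION}"
# prints: deploying 6 nodes with Ceph reef
```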
2026-01-10 13:57:35.061774 | orchestrator | API version: 1.47 2026-01-10 13:57:35.061787 | orchestrator | Go version: go1.22.11 2026-01-10 13:57:35.061799 | orchestrator | Git commit: 9f9e405 2026-01-10 13:57:35.061810 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-10 13:57:35.061822 | orchestrator | OS/Arch: linux/amd64 2026-01-10 13:57:35.061833 | orchestrator | Context: default 2026-01-10 13:57:35.061844 | orchestrator | 2026-01-10 13:57:35.061856 | orchestrator | Server: Docker Engine - Community 2026-01-10 13:57:35.061867 | orchestrator | Engine: 2026-01-10 13:57:35.061879 | orchestrator | Version: 27.5.1 2026-01-10 13:57:35.061890 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-10 13:57:35.061947 | orchestrator | Go version: go1.22.11 2026-01-10 13:57:35.061970 | orchestrator | Git commit: 4c9b3b0 2026-01-10 13:57:35.061989 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-10 13:57:35.062007 | orchestrator | OS/Arch: linux/amd64 2026-01-10 13:57:35.062058 | orchestrator | Experimental: false 2026-01-10 13:57:35.062070 | orchestrator | containerd: 2026-01-10 13:57:35.062081 | orchestrator | Version: v2.2.1 2026-01-10 13:57:35.062092 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-10 13:57:35.062104 | orchestrator | runc: 2026-01-10 13:57:35.062115 | orchestrator | Version: 1.3.4 2026-01-10 13:57:35.062126 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-10 13:57:35.062136 | orchestrator | docker-init: 2026-01-10 13:57:35.062147 | orchestrator | Version: 0.19.0 2026-01-10 13:57:35.062159 | orchestrator | GitCommit: de40ad0 2026-01-10 13:57:35.065601 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-10 13:57:35.074880 | orchestrator | + set -e 2026-01-10 13:57:35.074935 | orchestrator | + source /opt/manager-vars.sh 2026-01-10 13:57:35.074947 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-10 13:57:35.074959 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-10 
13:57:35.074970 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-10 13:57:35.074981 | orchestrator | ++ CEPH_VERSION=reef 2026-01-10 13:57:35.074992 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-10 13:57:35.075004 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-10 13:57:35.075015 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-10 13:57:35.075026 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-10 13:57:35.075037 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-10 13:57:35.075048 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-10 13:57:35.075059 | orchestrator | ++ export ARA=false 2026-01-10 13:57:35.075070 | orchestrator | ++ ARA=false 2026-01-10 13:57:35.075081 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-10 13:57:35.075092 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-10 13:57:35.075103 | orchestrator | ++ export TEMPEST=false 2026-01-10 13:57:35.075114 | orchestrator | ++ TEMPEST=false 2026-01-10 13:57:35.075134 | orchestrator | ++ export IS_ZUUL=true 2026-01-10 13:57:35.075145 | orchestrator | ++ IS_ZUUL=true 2026-01-10 13:57:35.075156 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-10 13:57:35.075167 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-10 13:57:35.075178 | orchestrator | ++ export EXTERNAL_API=false 2026-01-10 13:57:35.075189 | orchestrator | ++ EXTERNAL_API=false 2026-01-10 13:57:35.075199 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-10 13:57:35.075245 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-10 13:57:35.075257 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-10 13:57:35.075268 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-10 13:57:35.075279 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-10 13:57:35.075290 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-10 13:57:35.075301 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-10 13:57:35.075312 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-10 13:57:35.075322 | orchestrator | ++ INTERACTIVE=false 2026-01-10 13:57:35.075333 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 13:57:35.075349 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-10 13:57:35.075360 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 13:57:35.075376 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 13:57:35.075387 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-10 13:57:35.082803 | orchestrator | + set -e 2026-01-10 13:57:35.082880 | orchestrator | + VERSION=reef 2026-01-10 13:57:35.084260 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-10 13:57:35.089644 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-10 13:57:35.089710 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-10 13:57:35.095201 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-01-10 13:57:35.101769 | orchestrator | + set -e 2026-01-10 13:57:35.101826 | orchestrator | + VERSION=2024.2 2026-01-10 13:57:35.102452 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-10 13:57:35.106149 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-10 13:57:35.106200 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-01-10 13:57:35.111624 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-10 13:57:35.112509 | orchestrator | ++ semver latest 7.0.0 2026-01-10 13:57:35.177296 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 13:57:35.177380 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 13:57:35.177394 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-10 13:57:35.178186 | orchestrator | ++ semver latest 10.0.0-0 2026-01-10 13:57:35.240478 | 
orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 13:57:35.241242 | orchestrator | ++ semver 2024.2 2025.1 2026-01-10 13:57:35.296301 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 13:57:35.296382 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-10 13:57:35.387289 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-10 13:57:35.388646 | orchestrator | + source /opt/venv/bin/activate 2026-01-10 13:57:35.389953 | orchestrator | ++ deactivate nondestructive 2026-01-10 13:57:35.389996 | orchestrator | ++ '[' -n '' ']' 2026-01-10 13:57:35.390002 | orchestrator | ++ '[' -n '' ']' 2026-01-10 13:57:35.390008 | orchestrator | ++ hash -r 2026-01-10 13:57:35.390089 | orchestrator | ++ '[' -n '' ']' 2026-01-10 13:57:35.390098 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-10 13:57:35.390103 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-10 13:57:35.390113 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-01-10 13:57:35.390149 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-10 13:57:35.390269 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-10 13:57:35.390278 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-10 13:57:35.390283 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-10 13:57:35.390351 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-10 13:57:35.390359 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-10 13:57:35.390397 | orchestrator | ++ export PATH 2026-01-10 13:57:35.390498 | orchestrator | ++ '[' -n '' ']' 2026-01-10 13:57:35.390539 | orchestrator | ++ '[' -z '' ']' 2026-01-10 13:57:35.390640 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-10 13:57:35.390648 | orchestrator | ++ PS1='(venv) ' 2026-01-10 13:57:35.390653 | orchestrator | ++ export PS1 2026-01-10 13:57:35.390658 | orchestrator | ++ 
VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-10 13:57:35.390721 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-10 13:57:35.390729 | orchestrator | ++ hash -r 2026-01-10 13:57:35.390819 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-10 13:57:36.585579 | orchestrator | 2026-01-10 13:57:36.585665 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-10 13:57:36.585675 | orchestrator | 2026-01-10 13:57:36.585682 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-10 13:57:37.149188 | orchestrator | ok: [testbed-manager] 2026-01-10 13:57:37.149338 | orchestrator | 2026-01-10 13:57:37.149357 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-01-10 13:57:38.110978 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:38.111075 | orchestrator | 2026-01-10 13:57:38.111092 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-10 13:57:38.111105 | orchestrator | 2026-01-10 13:57:38.111116 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:57:40.322074 | orchestrator | ok: [testbed-manager] 2026-01-10 13:57:40.322176 | orchestrator | 2026-01-10 13:57:40.322187 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-10 13:57:40.377154 | orchestrator | ok: [testbed-manager] 2026-01-10 13:57:40.377255 | orchestrator | 2026-01-10 13:57:40.377268 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-10 13:57:40.837717 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:40.837822 | orchestrator | 2026-01-10 13:57:40.837839 | orchestrator | TASK [Add netbox_enable parameter] 
********************************************* 2026-01-10 13:57:40.868507 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:57:40.868598 | orchestrator | 2026-01-10 13:57:40.868612 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-10 13:57:41.204441 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:41.204544 | orchestrator | 2026-01-10 13:57:41.204578 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-10 13:57:41.257718 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:57:41.257846 | orchestrator | 2026-01-10 13:57:41.257862 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-10 13:57:41.603782 | orchestrator | ok: [testbed-manager] 2026-01-10 13:57:41.603876 | orchestrator | 2026-01-10 13:57:41.603891 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-01-10 13:57:41.726593 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:57:41.726687 | orchestrator | 2026-01-10 13:57:41.726703 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-10 13:57:41.726716 | orchestrator | 2026-01-10 13:57:41.726728 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:57:43.363284 | orchestrator | ok: [testbed-manager] 2026-01-10 13:57:43.363388 | orchestrator | 2026-01-10 13:57:43.363404 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-10 13:57:43.473126 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-10 13:57:43.473251 | orchestrator | 2026-01-10 13:57:43.473269 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-10 13:57:43.536189 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-10 13:57:43.536304 | orchestrator | 2026-01-10 13:57:43.536320 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-10 13:57:44.600798 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-10 13:57:44.600895 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-10 13:57:44.600910 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-10 13:57:44.600923 | orchestrator | 2026-01-10 13:57:44.600936 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-10 13:57:46.305297 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-10 13:57:46.305388 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-01-10 13:57:46.305400 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-10 13:57:46.305412 | orchestrator | 2026-01-10 13:57:46.305423 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-10 13:57:46.916600 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 13:57:46.916700 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:46.916716 | orchestrator | 2026-01-10 13:57:46.916728 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-10 13:57:47.552091 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 13:57:47.552188 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:47.552206 | orchestrator | 2026-01-10 13:57:47.552250 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-10 13:57:47.607897 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:57:47.607992 | orchestrator | 2026-01-10 
13:57:47.608008 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-10 13:57:47.951604 | orchestrator | ok: [testbed-manager] 2026-01-10 13:57:47.951668 | orchestrator | 2026-01-10 13:57:47.951675 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-10 13:57:48.013679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-10 13:57:48.013757 | orchestrator | 2026-01-10 13:57:48.013769 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-10 13:57:49.046437 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:49.046512 | orchestrator | 2026-01-10 13:57:49.046523 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-10 13:57:49.831183 | orchestrator | changed: [testbed-manager] 2026-01-10 13:57:49.831348 | orchestrator | 2026-01-10 13:57:49.831376 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-10 13:58:07.756911 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:07.757019 | orchestrator | 2026-01-10 13:58:07.757031 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-10 13:58:07.814840 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:58:07.814931 | orchestrator | 2026-01-10 13:58:07.814941 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-10 13:58:07.814979 | orchestrator | 2026-01-10 13:58:07.814987 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:58:09.556822 | orchestrator | ok: [testbed-manager] 2026-01-10 13:58:09.556927 | orchestrator | 2026-01-10 13:58:09.556944 | orchestrator | TASK [Apply manager role] 
****************************************************** 2026-01-10 13:58:09.675389 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-10 13:58:09.675508 | orchestrator | 2026-01-10 13:58:09.675532 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-10 13:58:09.729905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 13:58:09.730063 | orchestrator | 2026-01-10 13:58:09.730083 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-10 13:58:12.345892 | orchestrator | ok: [testbed-manager] 2026-01-10 13:58:12.346201 | orchestrator | 2026-01-10 13:58:12.347058 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-10 13:58:12.389720 | orchestrator | ok: [testbed-manager] 2026-01-10 13:58:12.389829 | orchestrator | 2026-01-10 13:58:12.389846 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-10 13:58:12.533358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-10 13:58:12.533476 | orchestrator | 2026-01-10 13:58:12.533501 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-10 13:58:15.303828 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-10 13:58:15.303954 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-10 13:58:15.303980 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-10 13:58:15.304003 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-10 13:58:15.304024 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-10 13:58:15.304037 | 
orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-10 13:58:15.304048 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-10 13:58:15.304060 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-10 13:58:15.304071 | orchestrator | 2026-01-10 13:58:15.304084 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-10 13:58:15.899723 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:15.899818 | orchestrator | 2026-01-10 13:58:15.899829 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-10 13:58:16.520374 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:16.520495 | orchestrator | 2026-01-10 13:58:16.520511 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-10 13:58:16.595684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-10 13:58:16.595781 | orchestrator | 2026-01-10 13:58:16.595794 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-10 13:58:17.785533 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-10 13:58:17.785664 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-10 13:58:17.785680 | orchestrator | 2026-01-10 13:58:17.785693 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-10 13:58:18.421426 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:18.421547 | orchestrator | 2026-01-10 13:58:18.421565 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-10 13:58:18.474722 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:58:18.474826 | orchestrator | 2026-01-10 13:58:18.474841 | 
orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-10 13:58:18.545688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-10 13:58:18.545797 | orchestrator | 2026-01-10 13:58:18.545813 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-10 13:58:19.170819 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:19.170941 | orchestrator | 2026-01-10 13:58:19.170992 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-10 13:58:19.232481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-10 13:58:19.232597 | orchestrator | 2026-01-10 13:58:19.232613 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-10 13:58:20.579875 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 13:58:20.580010 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 13:58:20.580026 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:20.580041 | orchestrator | 2026-01-10 13:58:20.580054 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-10 13:58:21.190925 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:21.191043 | orchestrator | 2026-01-10 13:58:21.191061 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-10 13:58:21.244225 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:58:21.244378 | orchestrator | 2026-01-10 13:58:21.244397 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-10 13:58:21.342806 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-10 13:58:21.342933 | orchestrator | 2026-01-10 13:58:21.342976 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-10 13:58:21.876850 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:21.876971 | orchestrator | 2026-01-10 13:58:21.876989 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-10 13:58:22.274133 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:22.274289 | orchestrator | 2026-01-10 13:58:22.274310 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-10 13:58:23.509248 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-10 13:58:23.509413 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-01-10 13:58:23.509431 | orchestrator | 2026-01-10 13:58:23.510279 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-10 13:58:24.139962 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:24.140068 | orchestrator | 2026-01-10 13:58:24.140080 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-10 13:58:24.544892 | orchestrator | ok: [testbed-manager] 2026-01-10 13:58:24.545024 | orchestrator | 2026-01-10 13:58:24.545044 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-10 13:58:24.900786 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:24.900910 | orchestrator | 2026-01-10 13:58:24.900935 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-10 13:58:24.947939 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:58:24.948055 | orchestrator | 2026-01-10 13:58:24.948071 | orchestrator | 
TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-10 13:58:25.024414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-10 13:58:25.024532 | orchestrator | 2026-01-10 13:58:25.024547 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-10 13:58:25.070815 | orchestrator | ok: [testbed-manager] 2026-01-10 13:58:25.070946 | orchestrator | 2026-01-10 13:58:25.070962 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-10 13:58:27.026501 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-10 13:58:27.026644 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-10 13:58:27.026661 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-01-10 13:58:27.026673 | orchestrator | 2026-01-10 13:58:27.026687 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-10 13:58:27.721305 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:27.721456 | orchestrator | 2026-01-10 13:58:27.721473 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-10 13:58:28.395893 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:28.396020 | orchestrator | 2026-01-10 13:58:28.396040 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-10 13:58:29.091724 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:29.091849 | orchestrator | 2026-01-10 13:58:29.091863 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-10 13:58:29.166957 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-10 13:58:29.167067 | orchestrator | 2026-01-10 13:58:29.167080 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-10 13:58:29.208084 | orchestrator | ok: [testbed-manager] 2026-01-10 13:58:29.208159 | orchestrator | 2026-01-10 13:58:29.208172 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-10 13:58:29.912815 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-10 13:58:29.912944 | orchestrator | 2026-01-10 13:58:29.912960 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-10 13:58:29.987523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-10 13:58:29.987642 | orchestrator | 2026-01-10 13:58:29.987658 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-10 13:58:30.676046 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:30.676175 | orchestrator | 2026-01-10 13:58:30.676192 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-10 13:58:31.271364 | orchestrator | ok: [testbed-manager] 2026-01-10 13:58:31.271487 | orchestrator | 2026-01-10 13:58:31.271505 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-10 13:58:31.324858 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:58:31.324964 | orchestrator | 2026-01-10 13:58:31.324980 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-10 13:58:31.384789 | orchestrator | ok: [testbed-manager] 2026-01-10 13:58:31.384889 | orchestrator | 2026-01-10 13:58:31.384912 | 
orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-10 13:58:32.221546 | orchestrator | changed: [testbed-manager] 2026-01-10 13:58:32.221668 | orchestrator | 2026-01-10 13:58:32.221685 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-10 13:59:42.367582 | orchestrator | changed: [testbed-manager] 2026-01-10 13:59:42.367726 | orchestrator | 2026-01-10 13:59:42.367744 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-10 13:59:43.362856 | orchestrator | ok: [testbed-manager] 2026-01-10 13:59:43.362986 | orchestrator | 2026-01-10 13:59:43.363003 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-10 13:59:43.422212 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:59:43.422420 | orchestrator | 2026-01-10 13:59:43.422440 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-01-10 13:59:45.876284 | orchestrator | changed: [testbed-manager] 2026-01-10 13:59:45.876470 | orchestrator | 2026-01-10 13:59:45.876512 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-10 13:59:45.953806 | orchestrator | ok: [testbed-manager] 2026-01-10 13:59:45.953936 | orchestrator | 2026-01-10 13:59:45.953950 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-10 13:59:45.953963 | orchestrator | 2026-01-10 13:59:45.953974 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-10 13:59:46.007827 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:59:46.007947 | orchestrator | 2026-01-10 13:59:46.007960 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-10 14:00:46.060023 | orchestrator | Pausing for 
60 seconds 2026-01-10 14:00:46.060139 | orchestrator | changed: [testbed-manager] 2026-01-10 14:00:46.060162 | orchestrator | 2026-01-10 14:00:46.060176 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-10 14:00:49.162158 | orchestrator | changed: [testbed-manager] 2026-01-10 14:00:49.162261 | orchestrator | 2026-01-10 14:00:49.162277 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-10 14:01:30.662700 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-10 14:01:30.662849 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-10 14:01:30.662866 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:30.662880 | orchestrator | 2026-01-10 14:01:30.662893 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-10 14:01:40.888917 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:40.889051 | orchestrator | 2026-01-10 14:01:40.889070 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-10 14:01:40.962287 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-10 14:01:40.962388 | orchestrator | 2026-01-10 14:01:40.962464 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-10 14:01:40.962479 | orchestrator | 2026-01-10 14:01:40.962490 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-10 14:01:41.016918 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:01:41.017005 | orchestrator | 2026-01-10 14:01:41.017019 | orchestrator | TASK [osism.services.manager : Include version verification tasks] 
************* 2026-01-10 14:01:41.075905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-10 14:01:41.076011 | orchestrator | 2026-01-10 14:01:41.076029 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-10 14:01:41.863965 | orchestrator | changed: [testbed-manager] 2026-01-10 14:01:41.864085 | orchestrator | 2026-01-10 14:01:41.864239 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-10 14:01:44.932821 | orchestrator | ok: [testbed-manager] 2026-01-10 14:01:44.932962 | orchestrator | 2026-01-10 14:01:44.932981 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-01-10 14:01:45.008703 | orchestrator | ok: [testbed-manager] => { 2026-01-10 14:01:45.008817 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-10 14:01:45.008835 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-10 14:01:45.008850 | orchestrator | "Checking running containers against expected versions...", 2026-01-10 14:01:45.008863 | orchestrator | "", 2026-01-10 14:01:45.008875 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-10 14:01:45.008887 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-10 14:01:45.008897 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.008909 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-10 14:01:45.008920 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.008931 | orchestrator | "", 2026-01-10 14:01:45.008942 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-10 14:01:45.008954 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-10 14:01:45.008965 | orchestrator | " 
Enabled: true", 2026-01-10 14:01:45.008976 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-01-10 14:01:45.008987 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.008998 | orchestrator | "", 2026-01-10 14:01:45.009009 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-10 14:01:45.009020 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-10 14:01:45.009030 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009042 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-10 14:01:45.009053 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009064 | orchestrator | "", 2026-01-10 14:01:45.009075 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-10 14:01:45.009086 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-10 14:01:45.009097 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009108 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-10 14:01:45.009119 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009130 | orchestrator | "", 2026-01-10 14:01:45.009174 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-10 14:01:45.009185 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-10 14:01:45.009196 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009207 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-10 14:01:45.009220 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009232 | orchestrator | "", 2026-01-10 14:01:45.009246 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-10 14:01:45.009259 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009272 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009286 | orchestrator | " Running: 
registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009298 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009310 | orchestrator | "", 2026-01-10 14:01:45.009323 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-10 14:01:45.009336 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-10 14:01:45.009349 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009362 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-10 14:01:45.009374 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009388 | orchestrator | "", 2026-01-10 14:01:45.009401 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-01-10 14:01:45.009414 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-10 14:01:45.009449 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009461 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-10 14:01:45.009484 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009502 | orchestrator | "", 2026-01-10 14:01:45.009514 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-10 14:01:45.009525 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-01-10 14:01:45.009536 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009547 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-01-10 14:01:45.009558 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009569 | orchestrator | "", 2026-01-10 14:01:45.009580 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-10 14:01:45.009591 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-10 14:01:45.009602 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009612 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-10 14:01:45.009623 | orchestrator | 
" Status: ✅ MATCH", 2026-01-10 14:01:45.009634 | orchestrator | "", 2026-01-10 14:01:45.009645 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-10 14:01:45.009655 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009666 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009677 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009687 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009698 | orchestrator | "", 2026-01-10 14:01:45.009709 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-10 14:01:45.009720 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009731 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009742 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009752 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009763 | orchestrator | "", 2026-01-10 14:01:45.009774 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-10 14:01:45.009785 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009796 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009807 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009817 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009828 | orchestrator | "", 2026-01-10 14:01:45.009839 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-10 14:01:45.009850 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009869 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009880 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009891 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009901 | orchestrator | "", 2026-01-10 14:01:45.009912 | orchestrator | "Checking service: flower (Celery 
Flower Monitor)", 2026-01-10 14:01:45.009943 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009955 | orchestrator | " Enabled: true", 2026-01-10 14:01:45.009966 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-10 14:01:45.009977 | orchestrator | " Status: ✅ MATCH", 2026-01-10 14:01:45.009988 | orchestrator | "", 2026-01-10 14:01:45.009999 | orchestrator | "=== Summary ===", 2026-01-10 14:01:45.010010 | orchestrator | "Errors (version mismatches): 0", 2026-01-10 14:01:45.010091 | orchestrator | "Warnings (expected containers not running): 0", 2026-01-10 14:01:45.010111 | orchestrator | "", 2026-01-10 14:01:45.010131 | orchestrator | "✅ All running containers match expected versions!" 2026-01-10 14:01:45.010150 | orchestrator | ] 2026-01-10 14:01:45.010168 | orchestrator | } 2026-01-10 14:01:45.010187 | orchestrator | 2026-01-10 14:01:45.010206 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-10 14:01:45.062142 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:01:45.062268 | orchestrator | 2026-01-10 14:01:45.062283 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:01:45.062297 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-10 14:01:45.062309 | orchestrator | 2026-01-10 14:01:45.157206 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-10 14:01:45.157309 | orchestrator | + deactivate 2026-01-10 14:01:45.157324 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-10 14:01:45.157336 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-10 14:01:45.157344 | orchestrator | + export PATH 2026-01-10 14:01:45.157353 | orchestrator | + unset 
_OLD_VIRTUAL_PATH 2026-01-10 14:01:45.157363 | orchestrator | + '[' -n '' ']' 2026-01-10 14:01:45.157379 | orchestrator | + hash -r 2026-01-10 14:01:45.157388 | orchestrator | + '[' -n '' ']' 2026-01-10 14:01:45.157397 | orchestrator | + unset VIRTUAL_ENV 2026-01-10 14:01:45.157405 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-10 14:01:45.157414 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-01-10 14:01:45.157455 | orchestrator | + unset -f deactivate 2026-01-10 14:01:45.157465 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-10 14:01:45.167086 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-10 14:01:45.167126 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-10 14:01:45.167136 | orchestrator | + local max_attempts=60 2026-01-10 14:01:45.167145 | orchestrator | + local name=ceph-ansible 2026-01-10 14:01:45.167154 | orchestrator | + local attempt_num=1 2026-01-10 14:01:45.168201 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-10 14:01:45.202864 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-10 14:01:45.202937 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-10 14:01:45.202947 | orchestrator | + local max_attempts=60 2026-01-10 14:01:45.202956 | orchestrator | + local name=kolla-ansible 2026-01-10 14:01:45.202965 | orchestrator | + local attempt_num=1 2026-01-10 14:01:45.203230 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-10 14:01:45.232997 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-10 14:01:45.233090 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-10 14:01:45.233100 | orchestrator | + local max_attempts=60 2026-01-10 14:01:45.233111 | orchestrator | + local name=osism-ansible 2026-01-10 14:01:45.233120 | orchestrator | + local attempt_num=1 2026-01-10 14:01:45.233969 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-10 14:01:45.269032 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-10 14:01:45.269131 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-10 14:01:45.269141 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-10 14:01:45.983239 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-01-10 14:01:46.178886 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-10 14:01:46.179042 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-10 14:01:46.179057 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-10 14:01:46.179069 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-01-10 14:01:46.179084 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-01-10 14:01:46.179095 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2026-01-10 14:01:46.179106 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2026-01-10 14:01:46.179117 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 57 seconds (healthy) 2026-01-10 14:01:46.179150 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2026-01-10 
14:01:46.179162 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2026-01-10 14:01:46.179173 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up About a minute (healthy) 2026-01-10 14:01:46.179183 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2026-01-10 14:01:46.179194 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-10 14:01:46.179205 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-01-10 14:01:46.179216 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-10 14:01:46.179227 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2026-01-10 14:01:46.187731 | orchestrator | ++ semver latest 7.0.0 2026-01-10 14:01:46.240740 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 14:01:46.240834 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 14:01:46.240851 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-10 14:01:46.245633 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-10 14:01:58.497974 | orchestrator | 2026-01-10 14:01:58 | INFO  | Task f4696583-c9e4-47c4-ba08-22cbf60660da (resolvconf) was prepared for execution. 
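The `set -x` trace above shows repeated calls such as `wait_for_container_healthy 60 ceph-ansible`, each followed by a `docker inspect` health probe. A plausible reconstruction of that helper is sketched below; the retry loop and sleep interval are assumptions, since the trace only shows the first (already healthy) probe of each container:

```shell
# Sketch of the wait_for_container_healthy helper seen in the trace above.
# Polls Docker's health status until the container reports "healthy" or the
# attempt budget is exhausted.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if [[ "$attempt_num" -ge "$max_attempts" ]]; then
            echo "Container ${name} did not become healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5  # polling interval is an assumption; not visible in the trace
    done
}
```

Because `docker inspect` exits non-zero for an unknown container, the helper also fails cleanly (after the attempt budget) when the container never started at all.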
2026-01-10 14:01:58.498164 | orchestrator | 2026-01-10 14:01:58 | INFO  | It takes a moment until task f4696583-c9e4-47c4-ba08-22cbf60660da (resolvconf) has been started and output is visible here. 2026-01-10 14:02:11.853583 | orchestrator | 2026-01-10 14:02:11.853728 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-10 14:02:11.853746 | orchestrator | 2026-01-10 14:02:11.853759 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 14:02:11.853772 | orchestrator | Saturday 10 January 2026 14:02:02 +0000 (0:00:00.104) 0:00:00.104 ****** 2026-01-10 14:02:11.853783 | orchestrator | ok: [testbed-manager] 2026-01-10 14:02:11.853795 | orchestrator | 2026-01-10 14:02:11.853807 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-10 14:02:11.853820 | orchestrator | Saturday 10 January 2026 14:02:05 +0000 (0:00:03.443) 0:00:03.548 ****** 2026-01-10 14:02:11.853831 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:02:11.853843 | orchestrator | 2026-01-10 14:02:11.853854 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-10 14:02:11.853865 | orchestrator | Saturday 10 January 2026 14:02:06 +0000 (0:00:00.065) 0:00:03.613 ****** 2026-01-10 14:02:11.853877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-10 14:02:11.853889 | orchestrator | 2026-01-10 14:02:11.853900 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-10 14:02:11.853912 | orchestrator | Saturday 10 January 2026 14:02:06 +0000 (0:00:00.084) 0:00:03.698 ****** 2026-01-10 14:02:11.853923 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 14:02:11.853935 | orchestrator | 2026-01-10 14:02:11.853946 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-10 14:02:11.853969 | orchestrator | Saturday 10 January 2026 14:02:06 +0000 (0:00:00.085) 0:00:03.783 ****** 2026-01-10 14:02:11.853981 | orchestrator | ok: [testbed-manager] 2026-01-10 14:02:11.853992 | orchestrator | 2026-01-10 14:02:11.854004 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-10 14:02:11.854015 | orchestrator | Saturday 10 January 2026 14:02:07 +0000 (0:00:01.075) 0:00:04.859 ****** 2026-01-10 14:02:11.854083 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:02:11.854097 | orchestrator | 2026-01-10 14:02:11.854110 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-10 14:02:11.854155 | orchestrator | Saturday 10 January 2026 14:02:07 +0000 (0:00:00.065) 0:00:04.925 ****** 2026-01-10 14:02:11.854169 | orchestrator | ok: [testbed-manager] 2026-01-10 14:02:11.854182 | orchestrator | 2026-01-10 14:02:11.854194 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-10 14:02:11.854206 | orchestrator | Saturday 10 January 2026 14:02:07 +0000 (0:00:00.509) 0:00:05.434 ****** 2026-01-10 14:02:11.854219 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:02:11.854232 | orchestrator | 2026-01-10 14:02:11.854253 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-10 14:02:11.854274 | orchestrator | Saturday 10 January 2026 14:02:07 +0000 (0:00:00.078) 0:00:05.512 ****** 2026-01-10 14:02:11.854295 | orchestrator | changed: [testbed-manager] 2026-01-10 14:02:11.854314 | orchestrator | 2026-01-10 
14:02:11.854333 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-10 14:02:11.854354 | orchestrator | Saturday 10 January 2026 14:02:08 +0000 (0:00:00.527) 0:00:06.040 ****** 2026-01-10 14:02:11.854376 | orchestrator | changed: [testbed-manager] 2026-01-10 14:02:11.854396 | orchestrator | 2026-01-10 14:02:11.854412 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-10 14:02:11.854423 | orchestrator | Saturday 10 January 2026 14:02:09 +0000 (0:00:01.049) 0:00:07.090 ****** 2026-01-10 14:02:11.854460 | orchestrator | ok: [testbed-manager] 2026-01-10 14:02:11.854472 | orchestrator | 2026-01-10 14:02:11.854513 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-10 14:02:11.854524 | orchestrator | Saturday 10 January 2026 14:02:10 +0000 (0:00:00.955) 0:00:08.045 ****** 2026-01-10 14:02:11.854536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-10 14:02:11.854547 | orchestrator | 2026-01-10 14:02:11.854558 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-10 14:02:11.854569 | orchestrator | Saturday 10 January 2026 14:02:10 +0000 (0:00:00.071) 0:00:08.117 ****** 2026-01-10 14:02:11.854580 | orchestrator | changed: [testbed-manager] 2026-01-10 14:02:11.854591 | orchestrator | 2026-01-10 14:02:11.854602 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:02:11.854614 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 14:02:11.854626 | orchestrator | 2026-01-10 14:02:11.854637 | orchestrator | 2026-01-10 14:02:11.854648 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-10 14:02:11.854659 | orchestrator | Saturday 10 January 2026 14:02:11 +0000 (0:00:01.122) 0:00:09.240 ****** 2026-01-10 14:02:11.854669 | orchestrator | =============================================================================== 2026-01-10 14:02:11.854680 | orchestrator | Gathering Facts --------------------------------------------------------- 3.44s 2026-01-10 14:02:11.854691 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s 2026-01-10 14:02:11.854702 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.08s 2026-01-10 14:02:11.854713 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s 2026-01-10 14:02:11.854723 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s 2026-01-10 14:02:11.854734 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s 2026-01-10 14:02:11.854765 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2026-01-10 14:02:11.854777 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-01-10 14:02:11.854788 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-01-10 14:02:11.854799 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-01-10 14:02:11.854810 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-01-10 14:02:11.854821 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-01-10 14:02:11.854832 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-01-10 14:02:12.118242 | 
orchestrator | + osism apply sshconfig 2026-01-10 14:02:24.261354 | orchestrator | 2026-01-10 14:02:24 | INFO  | Task 49cc0bf0-b6cd-4ae3-bca8-72818123c980 (sshconfig) was prepared for execution. 2026-01-10 14:02:24.261511 | orchestrator | 2026-01-10 14:02:24 | INFO  | It takes a moment until task 49cc0bf0-b6cd-4ae3-bca8-72818123c980 (sshconfig) has been started and output is visible here. 2026-01-10 14:02:36.082696 | orchestrator | 2026-01-10 14:02:36.082881 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-10 14:02:36.082913 | orchestrator | 2026-01-10 14:02:36.082935 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-10 14:02:36.082954 | orchestrator | Saturday 10 January 2026 14:02:28 +0000 (0:00:00.162) 0:00:00.162 ****** 2026-01-10 14:02:36.082973 | orchestrator | ok: [testbed-manager] 2026-01-10 14:02:36.082993 | orchestrator | 2026-01-10 14:02:36.083011 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-10 14:02:36.083032 | orchestrator | Saturday 10 January 2026 14:02:28 +0000 (0:00:00.563) 0:00:00.726 ****** 2026-01-10 14:02:36.083051 | orchestrator | changed: [testbed-manager] 2026-01-10 14:02:36.083110 | orchestrator | 2026-01-10 14:02:36.083122 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-10 14:02:36.083134 | orchestrator | Saturday 10 January 2026 14:02:29 +0000 (0:00:00.528) 0:00:01.255 ****** 2026-01-10 14:02:36.083145 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-10 14:02:36.083157 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-10 14:02:36.083168 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-10 14:02:36.083181 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-10 14:02:36.083194 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-3) 2026-01-10 14:02:36.083207 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-10 14:02:36.083220 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-01-10 14:02:36.083232 | orchestrator | 2026-01-10 14:02:36.083246 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-10 14:02:36.083259 | orchestrator | Saturday 10 January 2026 14:02:35 +0000 (0:00:05.654) 0:00:06.909 ****** 2026-01-10 14:02:36.083272 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:02:36.083283 | orchestrator | 2026-01-10 14:02:36.083294 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-10 14:02:36.083305 | orchestrator | Saturday 10 January 2026 14:02:35 +0000 (0:00:00.077) 0:00:06.987 ****** 2026-01-10 14:02:36.083316 | orchestrator | changed: [testbed-manager] 2026-01-10 14:02:36.083327 | orchestrator | 2026-01-10 14:02:36.083338 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:02:36.083351 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:02:36.083363 | orchestrator | 2026-01-10 14:02:36.083374 | orchestrator | 2026-01-10 14:02:36.083385 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:02:36.083396 | orchestrator | Saturday 10 January 2026 14:02:35 +0000 (0:00:00.582) 0:00:07.569 ****** 2026-01-10 14:02:36.083408 | orchestrator | =============================================================================== 2026-01-10 14:02:36.083418 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.65s 2026-01-10 14:02:36.083430 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2026-01-10 14:02:36.083466 | orchestrator | 
osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2026-01-10 14:02:36.083479 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2026-01-10 14:02:36.083490 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-01-10 14:02:36.381332 | orchestrator | + osism apply known-hosts 2026-01-10 14:02:48.560883 | orchestrator | 2026-01-10 14:02:48 | INFO  | Task b1e8eeca-1dae-49ea-ae9e-c0cb537c46a4 (known-hosts) was prepared for execution. 2026-01-10 14:02:48.561031 | orchestrator | 2026-01-10 14:02:48 | INFO  | It takes a moment until task b1e8eeca-1dae-49ea-ae9e-c0cb537c46a4 (known-hosts) has been started and output is visible here. 2026-01-10 14:03:05.119894 | orchestrator | 2026-01-10 14:03:05.120076 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-10 14:03:05.120108 | orchestrator | 2026-01-10 14:03:05.120121 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-10 14:03:05.120134 | orchestrator | Saturday 10 January 2026 14:02:52 +0000 (0:00:00.160) 0:00:00.160 ****** 2026-01-10 14:03:05.120147 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-10 14:03:05.120159 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-10 14:03:05.120170 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-10 14:03:05.120181 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-10 14:03:05.120192 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-10 14:03:05.120203 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-10 14:03:05.120245 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-10 14:03:05.120257 | orchestrator | 2026-01-10 14:03:05.120268 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts 
entries for all hosts with hostname] *** 2026-01-10 14:03:05.120281 | orchestrator | Saturday 10 January 2026 14:02:58 +0000 (0:00:05.900) 0:00:06.061 ****** 2026-01-10 14:03:05.120294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-10 14:03:05.120309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-10 14:03:05.120335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-10 14:03:05.120348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-10 14:03:05.120362 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-10 14:03:05.120375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-10 14:03:05.120388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-10 14:03:05.120400 | orchestrator | 2026-01-10 14:03:05.120413 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:05.120426 | orchestrator | Saturday 10 January 2026 14:02:58 +0000 
(0:00:00.158) 0:00:06.219 ****** 2026-01-10 14:03:05.120440 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1kQ4AkS2t2HzvT6PXegA1QFV8h6041n4omcxmwzH7m6WQr0Mrj8M+tdi9r14HLnI1Bfd1NGsJe7fWThKeuiUQ=) 2026-01-10 14:03:05.120488 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrRKOMyggTMYV0cfWRd9jy8+eoNYKcggsGZGs4sxgjyaLoBr/qgTxCZtyIoAb4xjUQpzhRQk+HfjPvOgDYOscLCBtZfTFtaxa+Cw3puSRm/jDAa7Xr+Gw7v+U18qASAqwELKoASgMA0Tx2u31R26twcdwitKX4VT3QqkDvQiDZ51AZdagCXOv+Nzuuw+pHW8VIgmfqV//A6UfOjwUmwIZjsYmsrk81U1l/E0jppFhboJK34iotYFqFGZwtWNWgwo+Ba3ZeGHVGCm7yknmbS65KjUjk336OIatqZOe5YuLu40nZO7kfVnJhxBrd+P/V9Y9sm15gDkGBgEjwF2E22IPc94xYLc4nPRZCFX0xAbxpsLuCR645oECtWkbFdru0imNcYszPxnDBYqQhrJ6WJxWtPEHe17K0CC/a6KnYpvj1ioFL2fERPN5AciIh3taTdXTJ4y6LaG+o7HP7Rc0+0oX8DHrQftoRWRcYhZOmtoW1GgOE16ZDNaPJ3WIxMsEybZM=) 2026-01-10 14:03:05.120512 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGuHpAZ05BfMnU+RnOjEyuzsv14cgvYWsuKFZMYyQIwu) 2026-01-10 14:03:05.120527 | orchestrator | 2026-01-10 14:03:05.120540 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:05.120553 | orchestrator | Saturday 10 January 2026 14:02:59 +0000 (0:00:01.135) 0:00:07.355 ****** 2026-01-10 14:03:05.120589 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDJBasKNv3QJA/fvngqe+FRuxIMCn7aZWYbFLMfAT5914flgdpzlweqgRs8e3W1M3Rp5VndhqTDzUquywK7TJxzv5AfaNxNHdXFv6RRe3/NnNWqdcvP3II65hcv+U2R79tu7p1z2ul+3v9A6Xu/1kiPQqHs/nzswTHYArfS0urcPk739c3RMEjQoFnjU7ParbpjiyBqfzaEo12fd9D6HI007+wPjrDj2KK+NU2EMhqxvBn4DHok7+0jXaRoo8sJHQEDrJbu7nVBxx/OpzPphRhgJ9Cu3IgthTToLjIXKbecERuUU7+j5/D4fOL+ZN5mxozV12YqXwaVfj9jpHTCh6Ov11SZnc+qp7CAFAinxD4alp9kj64oi0rgCREYKWRNN2yaDQzZNQNqS29YBj7dz/+o0XSQVnX4TfSOizFYcOhJh/v6Ov+EUY9Qc+7PdFWj3e6SiMCOeT2ThI6AoYv3kqLDbjGxV9VgQThqONo371N+5v6Yv2ZZkygzd/RSH6gJ6ZM=) 2026-01-10 14:03:05.120613 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDDgvqCVJrc8JN9YrlAHjndCFbwftuRcNDum1PSUR+cG) 2026-01-10 14:03:05.120624 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJteysz+dXhXFsnBkdTjAhRPyJZ3cz9ISh6RVns/qkVQnbT/jbi0tLrux6MpgS8XzqvMtmnkxQjk/i6J0uBKx4k=) 2026-01-10 14:03:05.120635 | orchestrator | 2026-01-10 14:03:05.120646 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:05.120657 | orchestrator | Saturday 10 January 2026 14:03:01 +0000 (0:00:01.019) 0:00:08.375 ****** 2026-01-10 14:03:05.120668 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN+eXNgMBZYX09WbuYPBQMxyJSFOL1B+n6W2gCgC8DF2ohwIMPYR8ijCmgxCmUh392+kKnl+aenTSZGm20B2lQw=) 2026-01-10 14:03:05.120680 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC5LdTQ2r2K3kBnyHB+DRRLtSgk46yL+BXqINzQ83s4xU3NAq7xSbBKTj2FmoocTTqsBQtvzzl/ql5E6ncuQDJMLmzGoSyzQKYgaH6a+g9UnnBLvMkkb2FCQUJPzNsqVTvlW5k5tEIfwWHbvtJJPZNMaUCPSAlm/gTaPWgYp3OWpiRkMtAvnAIqZGddry8TA7BYuGNMInZlhrSq9bqT9FYxmaJvtfHH9JV//PfdYlDFFbitSyNBgMN53sDiu9ntnVkewZQJId5jEQVJKCEtZU5n2DGvlpM/LhXrZJNVaDi1KElo7GjrD8eoW3S12JbpY/aOX2XkD2G2bRvbIoDG1dQ8OJnZSV4zLni6MQmagOp2O55P+HyMJTzKvg1Ci2IeG1tyr4KtGwgjvNtwQaC2Wx8nc0OiTbdMIM3Fy0kTzYKtQLSjjUZTRk/ZyJfoTwxXuWGIHfwNcD38ahAVGFhmWVD56UxGtjf85/x3EMRcyEuJgtGCXFyW3uUa+gcqCscRCsc=) 2026-01-10 14:03:05.120692 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE1OBN71HH7hmvn9SEO2jz6q/rzMBu+dDZM3+MLaqxx7) 2026-01-10 14:03:05.120703 | orchestrator | 2026-01-10 14:03:05.120714 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:05.120725 | orchestrator | Saturday 10 January 2026 14:03:02 +0000 (0:00:01.025) 0:00:09.400 ****** 2026-01-10 14:03:05.120813 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCy715/ud2IsEM5SGtPsiVDXEsZJd2656ixTWsgEbhG+WHMCTlk6wVUyNVjmwvrOF0rQWcShELQpXQPZS0dlsWJUsnQWi6lRSSv1gwSwbgkPR6ZLcQHwsaxFOkCCLfOybTescRN9QReu3fe2j4BfFj7WzJxCyeREMmhrXFmoxW8Xeep0O0lcsNewmpW+a5zGkOkieFMOXmYRgRvkGSWamM4WulSGTA/q3prKhnuMKGZTebi3udSL0MIZjBUOfewJDxy6OxQpChkDCTU92olF+IhR2giO7X9k/zSziW6XoKs/xkAh7W6N0QgXlR26gJ98jYcuJebQFTJhJ/59PCi8lUeP/ocIy0DZjdHOtrCk+DMyj7OHP3MXs7V5wOCZS6O9OTv/XQe89DfG3KI6dih7lFg7GQRDnuyT3E9o24O3tEFaRYghZkpkYDdQH/O7phJaaqQugQ0n13DoQF8jS+dfKBXZJSeRU6vpLs6nvXWEoxpeaGcujaLpG0emlSLa87DCgk=) 2026-01-10 14:03:05.120826 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOCwarkrjA0Si4xnv1xK4zm+3wRzTTE+gXcQpsCWhJOE) 2026-01-10 14:03:05.120837 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOQxjkBVl/Ibt1GSmQqW6dxQoA8hVJSgPm/b6wHgqvUzABmhG2Wmos4k9Z3yR7uQmKvd0XYSAkOL1NIbjf3B2VA=) 2026-01-10 14:03:05.120848 | orchestrator | 2026-01-10 14:03:05.120859 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:05.120870 | orchestrator | Saturday 10 January 2026 14:03:03 +0000 (0:00:01.013) 0:00:10.413 ****** 2026-01-10 14:03:05.120881 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF1oPJVyhDtDnT0QFQRQpo8GYkYI3ryBWDwnyWO5zmg/olxczqCh+eijUkHjkbHZ/b1RbYUALmNcReAl1a3NCX8=) 2026-01-10 14:03:05.120892 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHb1p6uEkOxSOg6eaprSbZBE6iC2KhJvpPR5ycXCED3Y) 2026-01-10 14:03:05.120903 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCU/gEcLRDlvrcL7ESAyZ9qhDEluPcKlWkljBp6mETEEjjn6Ii+C/dPqt1vyvp+NaFRC+L/07zDsqA3hw+nwMfK/is+VIXC6gJCDQ3qvjSXPZr1Xe3Bujvcxk46Y/Oy4QiSDHCxGjZ1pwLTM0tgSt7shzejfpo1r/gaiM3/MoRUgZtAH5fSrEQssFarByyyfKunlAMUnZ6JDaqXnZvFZonUNcLZ5YJs7x/ixN8C30jAvhhQgodxZpCXDgln6EniqL/DuT0Qf+ZgGsNwBRNxEHvfGghW7VLyrNgT+dUChcSG5azrWVueKAxUqnET2Tb/PH72T8kqs6/9Xj+Gk0n6MP5Zrzy9R0p8qf8TUCNm6P7TXFckCOeIEc6lT88qOLG14lNG8LBENKY1LPmozOpEu0/RVmj3+j2GHi5JBookX3Y83rCsfZh4yMf4NPh+pV1ttm4thw9o91ysrNc4NDrPZWSV4nWUN4CPX0UrW8V4UjRGbXuhpX2ZmXB/pR/aG0f9vLc=) 2026-01-10 14:03:05.120922 | orchestrator | 2026-01-10 14:03:05.120934 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:05.120944 | orchestrator | Saturday 10 January 2026 14:03:04 +0000 (0:00:01.013) 0:00:11.427 ****** 2026-01-10 14:03:05.120964 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC7dbZx8jNRd01EmMQ3oKDVWQW+O+gUVwt93ljEAP8vJhMMS0O/AUSKm4+mVo0cSGmvYL6hB5mYV37YI3bMnsm2U5dyoeD4eGxM8Ie+bH1bz0ehtDIUo6r0rb89737ZYdwxU3xgeGBIr2XyNE07mhktiXTICnuVGI6UNRtqKA54u2ECjsmdcyMhECG6ellNSW+ZVO3Bk+qVvW9MwVI/+K0maj/junEfFjQTAaxLTmht9zpc1n5JOqAgqum9g3NT4nGXtM/UmYceRu/P2+YknxsPY3seSwN6YAU9GYUDpw66o6laxHSefqyyU5Qf8Qrgj4LE431poJQVKX+ubvJsSHqh3sQhknzm5z4cRUsp3TEF+0EbL4KjCJZzl9LGblo6pAgCaTdA6ZHEJFrMqUpUfdYoPQWOwg0Inh0hVSEw30fBQulGtUqawbGQ5aEm70Vh8QNOde3JUlTVh6WkcGSFc4PWSuyaI7FslkCKUo3T1xa0X8oeZini+Cugz+uV3SVsOV8=) 2026-01-10 14:03:15.675044 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEHlI0W11nF1OfWxdOb/db7YpG4qWQzT6TjddJOMWMVX9sA6q90Oa3h6kpMNb8me7LfahEGlO8oVIzwK5CF6Fa0=) 2026-01-10 14:03:15.675188 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBZpGd1bLsFXy8Q/WneOXqabF5oaqLNSWc8tiA3RHogJ) 2026-01-10 14:03:15.675206 | orchestrator | 2026-01-10 14:03:15.675219 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:15.675232 | orchestrator | Saturday 10 January 2026 14:03:05 +0000 (0:00:01.040) 0:00:12.468 ****** 2026-01-10 14:03:15.675245 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXFZvW6SCfb7nXRgaWbv5OvSOrHg85jRVMBUsy6A9FNoTqqNUPeSbb0bXkLACY+pJEJCeSSAt8sCAXSXllRBo1A04+pPjGPb4ElvRy94iew9BWpKaO+gUWe3BCA5IjBeVrBGevwcofyvisiafYioy7nciG+24ngw3yxqXewESq6B6+RDS1TXYjPQhUQ7xqFv41yCmwmueiIYzQ5TUbxL1YyI55euHO8GMyZojLrEw6Zbctfngr66UDNIA+7aYq/JKLf8jNAuBQoLduX4YhizXMHhhFqRy3Hy5HxKfbyNLYgnH8viFeunXGixm8pbda+yBoLCZuGPmfCALhxvla4hfmlHaH5CNIczbe0f/Ab/l4HuAXiKhCC0GXYFATghj+ukIOtgBXghBV39nk+yMt/TdXyzMhoUL271RNIHxjml8DQZw3/4chgNaIQill0TidGKzZM/i/KS46tMXbsV/TymUzldVuTo1fGlzAevxuy+MLqOfDzrZWCWyfZRC8EXvD1FM=) 2026-01-10 14:03:15.675260 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHmuptb56+fcoHN9Hr19es0uOUv2eylHO/mGYTjkoMyHu6OiPnN/n75lMHWfU0dvvl6mDjrp4JZvS8vRXAxUzuE=) 2026-01-10 14:03:15.675272 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII3fhCpnGWQGTNTN5ZE57HPbbn8sjnOe42+wRdz8LD1p) 2026-01-10 14:03:15.675283 | orchestrator | 2026-01-10 14:03:15.675295 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-10 14:03:15.675307 | orchestrator | Saturday 10 January 2026 14:03:06 +0000 (0:00:01.061) 0:00:13.529 ****** 2026-01-10 14:03:15.675319 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-10 14:03:15.675331 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-10 14:03:15.675343 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-10 14:03:15.675353 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-10 14:03:15.675364 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-10 14:03:15.675375 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-10 14:03:15.675386 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-10 14:03:15.675397 | orchestrator | 2026-01-10 14:03:15.675408 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-10 14:03:15.675420 | orchestrator | Saturday 10 January 2026 14:03:11 +0000 (0:00:05.160) 0:00:18.690 ****** 2026-01-10 14:03:15.675489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-10 14:03:15.675505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-10 14:03:15.675516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-10 14:03:15.675547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-10 14:03:15.675560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-10 14:03:15.675572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-10 14:03:15.675585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-10 14:03:15.675597 | orchestrator | 2026-01-10 14:03:15.675615 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:15.675635 | orchestrator | Saturday 10 January 2026 14:03:11 +0000 (0:00:00.161) 0:00:18.851 ****** 2026-01-10 14:03:15.675654 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGuHpAZ05BfMnU+RnOjEyuzsv14cgvYWsuKFZMYyQIwu) 2026-01-10 14:03:15.675708 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCrRKOMyggTMYV0cfWRd9jy8+eoNYKcggsGZGs4sxgjyaLoBr/qgTxCZtyIoAb4xjUQpzhRQk+HfjPvOgDYOscLCBtZfTFtaxa+Cw3puSRm/jDAa7Xr+Gw7v+U18qASAqwELKoASgMA0Tx2u31R26twcdwitKX4VT3QqkDvQiDZ51AZdagCXOv+Nzuuw+pHW8VIgmfqV//A6UfOjwUmwIZjsYmsrk81U1l/E0jppFhboJK34iotYFqFGZwtWNWgwo+Ba3ZeGHVGCm7yknmbS65KjUjk336OIatqZOe5YuLu40nZO7kfVnJhxBrd+P/V9Y9sm15gDkGBgEjwF2E22IPc94xYLc4nPRZCFX0xAbxpsLuCR645oECtWkbFdru0imNcYszPxnDBYqQhrJ6WJxWtPEHe17K0CC/a6KnYpvj1ioFL2fERPN5AciIh3taTdXTJ4y6LaG+o7HP7Rc0+0oX8DHrQftoRWRcYhZOmtoW1GgOE16ZDNaPJ3WIxMsEybZM=) 2026-01-10 14:03:15.675729 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1kQ4AkS2t2HzvT6PXegA1QFV8h6041n4omcxmwzH7m6WQr0Mrj8M+tdi9r14HLnI1Bfd1NGsJe7fWThKeuiUQ=) 2026-01-10 14:03:15.675746 | orchestrator | 2026-01-10 14:03:15.675765 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:15.675786 | orchestrator | Saturday 10 January 2026 14:03:12 +0000 (0:00:01.043) 0:00:19.895 ****** 2026-01-10 14:03:15.675807 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJBasKNv3QJA/fvngqe+FRuxIMCn7aZWYbFLMfAT5914flgdpzlweqgRs8e3W1M3Rp5VndhqTDzUquywK7TJxzv5AfaNxNHdXFv6RRe3/NnNWqdcvP3II65hcv+U2R79tu7p1z2ul+3v9A6Xu/1kiPQqHs/nzswTHYArfS0urcPk739c3RMEjQoFnjU7ParbpjiyBqfzaEo12fd9D6HI007+wPjrDj2KK+NU2EMhqxvBn4DHok7+0jXaRoo8sJHQEDrJbu7nVBxx/OpzPphRhgJ9Cu3IgthTToLjIXKbecERuUU7+j5/D4fOL+ZN5mxozV12YqXwaVfj9jpHTCh6Ov11SZnc+qp7CAFAinxD4alp9kj64oi0rgCREYKWRNN2yaDQzZNQNqS29YBj7dz/+o0XSQVnX4TfSOizFYcOhJh/v6Ov+EUY9Qc+7PdFWj3e6SiMCOeT2ThI6AoYv3kqLDbjGxV9VgQThqONo371N+5v6Yv2ZZkygzd/RSH6gJ6ZM=) 2026-01-10 14:03:15.675826 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJteysz+dXhXFsnBkdTjAhRPyJZ3cz9ISh6RVns/qkVQnbT/jbi0tLrux6MpgS8XzqvMtmnkxQjk/i6J0uBKx4k=) 
2026-01-10 14:03:15.675845 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDDgvqCVJrc8JN9YrlAHjndCFbwftuRcNDum1PSUR+cG) 2026-01-10 14:03:15.675879 | orchestrator | 2026-01-10 14:03:15.675901 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:15.675920 | orchestrator | Saturday 10 January 2026 14:03:13 +0000 (0:00:01.056) 0:00:20.952 ****** 2026-01-10 14:03:15.675939 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN+eXNgMBZYX09WbuYPBQMxyJSFOL1B+n6W2gCgC8DF2ohwIMPYR8ijCmgxCmUh392+kKnl+aenTSZGm20B2lQw=) 2026-01-10 14:03:15.675959 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE1OBN71HH7hmvn9SEO2jz6q/rzMBu+dDZM3+MLaqxx7) 2026-01-10 14:03:15.675979 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5LdTQ2r2K3kBnyHB+DRRLtSgk46yL+BXqINzQ83s4xU3NAq7xSbBKTj2FmoocTTqsBQtvzzl/ql5E6ncuQDJMLmzGoSyzQKYgaH6a+g9UnnBLvMkkb2FCQUJPzNsqVTvlW5k5tEIfwWHbvtJJPZNMaUCPSAlm/gTaPWgYp3OWpiRkMtAvnAIqZGddry8TA7BYuGNMInZlhrSq9bqT9FYxmaJvtfHH9JV//PfdYlDFFbitSyNBgMN53sDiu9ntnVkewZQJId5jEQVJKCEtZU5n2DGvlpM/LhXrZJNVaDi1KElo7GjrD8eoW3S12JbpY/aOX2XkD2G2bRvbIoDG1dQ8OJnZSV4zLni6MQmagOp2O55P+HyMJTzKvg1Ci2IeG1tyr4KtGwgjvNtwQaC2Wx8nc0OiTbdMIM3Fy0kTzYKtQLSjjUZTRk/ZyJfoTwxXuWGIHfwNcD38ahAVGFhmWVD56UxGtjf85/x3EMRcyEuJgtGCXFyW3uUa+gcqCscRCsc=) 2026-01-10 14:03:15.676000 | orchestrator | 2026-01-10 14:03:15.676020 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:15.676039 | orchestrator | Saturday 10 January 2026 14:03:14 +0000 (0:00:01.022) 0:00:21.975 ****** 2026-01-10 14:03:15.676051 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOQxjkBVl/Ibt1GSmQqW6dxQoA8hVJSgPm/b6wHgqvUzABmhG2Wmos4k9Z3yR7uQmKvd0XYSAkOL1NIbjf3B2VA=) 2026-01-10 14:03:15.676063 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCy715/ud2IsEM5SGtPsiVDXEsZJd2656ixTWsgEbhG+WHMCTlk6wVUyNVjmwvrOF0rQWcShELQpXQPZS0dlsWJUsnQWi6lRSSv1gwSwbgkPR6ZLcQHwsaxFOkCCLfOybTescRN9QReu3fe2j4BfFj7WzJxCyeREMmhrXFmoxW8Xeep0O0lcsNewmpW+a5zGkOkieFMOXmYRgRvkGSWamM4WulSGTA/q3prKhnuMKGZTebi3udSL0MIZjBUOfewJDxy6OxQpChkDCTU92olF+IhR2giO7X9k/zSziW6XoKs/xkAh7W6N0QgXlR26gJ98jYcuJebQFTJhJ/59PCi8lUeP/ocIy0DZjdHOtrCk+DMyj7OHP3MXs7V5wOCZS6O9OTv/XQe89DfG3KI6dih7lFg7GQRDnuyT3E9o24O3tEFaRYghZkpkYDdQH/O7phJaaqQugQ0n13DoQF8jS+dfKBXZJSeRU6vpLs6nvXWEoxpeaGcujaLpG0emlSLa87DCgk=) 2026-01-10 14:03:15.676090 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOCwarkrjA0Si4xnv1xK4zm+3wRzTTE+gXcQpsCWhJOE) 2026-01-10 14:03:19.960640 | orchestrator | 2026-01-10 14:03:19.960772 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:19.960789 | orchestrator | Saturday 10 January 2026 14:03:15 +0000 (0:00:01.045) 0:00:23.020 ****** 2026-01-10 14:03:19.960801 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHb1p6uEkOxSOg6eaprSbZBE6iC2KhJvpPR5ycXCED3Y) 2026-01-10 14:03:19.960848 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCU/gEcLRDlvrcL7ESAyZ9qhDEluPcKlWkljBp6mETEEjjn6Ii+C/dPqt1vyvp+NaFRC+L/07zDsqA3hw+nwMfK/is+VIXC6gJCDQ3qvjSXPZr1Xe3Bujvcxk46Y/Oy4QiSDHCxGjZ1pwLTM0tgSt7shzejfpo1r/gaiM3/MoRUgZtAH5fSrEQssFarByyyfKunlAMUnZ6JDaqXnZvFZonUNcLZ5YJs7x/ixN8C30jAvhhQgodxZpCXDgln6EniqL/DuT0Qf+ZgGsNwBRNxEHvfGghW7VLyrNgT+dUChcSG5azrWVueKAxUqnET2Tb/PH72T8kqs6/9Xj+Gk0n6MP5Zrzy9R0p8qf8TUCNm6P7TXFckCOeIEc6lT88qOLG14lNG8LBENKY1LPmozOpEu0/RVmj3+j2GHi5JBookX3Y83rCsfZh4yMf4NPh+pV1ttm4thw9o91ysrNc4NDrPZWSV4nWUN4CPX0UrW8V4UjRGbXuhpX2ZmXB/pR/aG0f9vLc=) 2026-01-10 14:03:19.960865 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF1oPJVyhDtDnT0QFQRQpo8GYkYI3ryBWDwnyWO5zmg/olxczqCh+eijUkHjkbHZ/b1RbYUALmNcReAl1a3NCX8=) 2026-01-10 14:03:19.960909 | orchestrator | 2026-01-10 14:03:19.960921 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:19.960932 | orchestrator | Saturday 10 January 2026 14:03:16 +0000 (0:00:01.020) 0:00:24.041 ****** 2026-01-10 14:03:19.960943 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7dbZx8jNRd01EmMQ3oKDVWQW+O+gUVwt93ljEAP8vJhMMS0O/AUSKm4+mVo0cSGmvYL6hB5mYV37YI3bMnsm2U5dyoeD4eGxM8Ie+bH1bz0ehtDIUo6r0rb89737ZYdwxU3xgeGBIr2XyNE07mhktiXTICnuVGI6UNRtqKA54u2ECjsmdcyMhECG6ellNSW+ZVO3Bk+qVvW9MwVI/+K0maj/junEfFjQTAaxLTmht9zpc1n5JOqAgqum9g3NT4nGXtM/UmYceRu/P2+YknxsPY3seSwN6YAU9GYUDpw66o6laxHSefqyyU5Qf8Qrgj4LE431poJQVKX+ubvJsSHqh3sQhknzm5z4cRUsp3TEF+0EbL4KjCJZzl9LGblo6pAgCaTdA6ZHEJFrMqUpUfdYoPQWOwg0Inh0hVSEw30fBQulGtUqawbGQ5aEm70Vh8QNOde3JUlTVh6WkcGSFc4PWSuyaI7FslkCKUo3T1xa0X8oeZini+Cugz+uV3SVsOV8=) 2026-01-10 14:03:19.960955 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEHlI0W11nF1OfWxdOb/db7YpG4qWQzT6TjddJOMWMVX9sA6q90Oa3h6kpMNb8me7LfahEGlO8oVIzwK5CF6Fa0=) 
2026-01-10 14:03:19.960967 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBZpGd1bLsFXy8Q/WneOXqabF5oaqLNSWc8tiA3RHogJ) 2026-01-10 14:03:19.960978 | orchestrator | 2026-01-10 14:03:19.960989 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:03:19.961004 | orchestrator | Saturday 10 January 2026 14:03:17 +0000 (0:00:01.076) 0:00:25.117 ****** 2026-01-10 14:03:19.961025 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXFZvW6SCfb7nXRgaWbv5OvSOrHg85jRVMBUsy6A9FNoTqqNUPeSbb0bXkLACY+pJEJCeSSAt8sCAXSXllRBo1A04+pPjGPb4ElvRy94iew9BWpKaO+gUWe3BCA5IjBeVrBGevwcofyvisiafYioy7nciG+24ngw3yxqXewESq6B6+RDS1TXYjPQhUQ7xqFv41yCmwmueiIYzQ5TUbxL1YyI55euHO8GMyZojLrEw6Zbctfngr66UDNIA+7aYq/JKLf8jNAuBQoLduX4YhizXMHhhFqRy3Hy5HxKfbyNLYgnH8viFeunXGixm8pbda+yBoLCZuGPmfCALhxvla4hfmlHaH5CNIczbe0f/Ab/l4HuAXiKhCC0GXYFATghj+ukIOtgBXghBV39nk+yMt/TdXyzMhoUL271RNIHxjml8DQZw3/4chgNaIQill0TidGKzZM/i/KS46tMXbsV/TymUzldVuTo1fGlzAevxuy+MLqOfDzrZWCWyfZRC8EXvD1FM=) 2026-01-10 14:03:19.961045 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHmuptb56+fcoHN9Hr19es0uOUv2eylHO/mGYTjkoMyHu6OiPnN/n75lMHWfU0dvvl6mDjrp4JZvS8vRXAxUzuE=) 2026-01-10 14:03:19.961063 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII3fhCpnGWQGTNTN5ZE57HPbbn8sjnOe42+wRdz8LD1p) 2026-01-10 14:03:19.961081 | orchestrator | 2026-01-10 14:03:19.961102 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-10 14:03:19.961119 | orchestrator | Saturday 10 January 2026 14:03:18 +0000 (0:00:01.034) 0:00:26.151 ****** 2026-01-10 14:03:19.961133 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-10 14:03:19.961146 | orchestrator | skipping: [testbed-manager] 
=> (item=testbed-node-3)  2026-01-10 14:03:19.961159 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-10 14:03:19.961171 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-10 14:03:19.961184 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-10 14:03:19.961197 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-10 14:03:19.961210 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-10 14:03:19.961222 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:03:19.961236 | orchestrator | 2026-01-10 14:03:19.961270 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-10 14:03:19.961284 | orchestrator | Saturday 10 January 2026 14:03:18 +0000 (0:00:00.148) 0:00:26.300 ****** 2026-01-10 14:03:19.961297 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:03:19.961310 | orchestrator | 2026-01-10 14:03:19.961322 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-10 14:03:19.961344 | orchestrator | Saturday 10 January 2026 14:03:19 +0000 (0:00:00.060) 0:00:26.361 ****** 2026-01-10 14:03:19.961357 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:03:19.961369 | orchestrator | 2026-01-10 14:03:19.961382 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-10 14:03:19.961395 | orchestrator | Saturday 10 January 2026 14:03:19 +0000 (0:00:00.053) 0:00:26.414 ****** 2026-01-10 14:03:19.961407 | orchestrator | changed: [testbed-manager] 2026-01-10 14:03:19.961419 | orchestrator | 2026-01-10 14:03:19.961432 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:03:19.961445 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 14:03:19.961458 | 
orchestrator | 2026-01-10 14:03:19.961509 | orchestrator | 2026-01-10 14:03:19.961528 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:03:19.961547 | orchestrator | Saturday 10 January 2026 14:03:19 +0000 (0:00:00.715) 0:00:27.130 ****** 2026-01-10 14:03:19.961567 | orchestrator | =============================================================================== 2026-01-10 14:03:19.961583 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.90s 2026-01-10 14:03:19.961595 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.16s 2026-01-10 14:03:19.961607 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-10 14:03:19.961618 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-10 14:03:19.961629 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-10 14:03:19.961640 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-10 14:03:19.961650 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-10 14:03:19.961661 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-10 14:03:19.961672 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-10 14:03:19.961683 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-10 14:03:19.961694 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-10 14:03:19.961704 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-01-10 14:03:19.961715 | orchestrator | osism.commons.known_hosts : Write 
scanned known_hosts entries ----------- 1.02s 2026-01-10 14:03:19.961726 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-01-10 14:03:19.961737 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-01-10 14:03:19.961748 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-01-10 14:03:19.961758 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.72s 2026-01-10 14:03:19.961769 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2026-01-10 14:03:19.961780 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-01-10 14:03:19.961791 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-01-10 14:03:20.243835 | orchestrator | + osism apply squid 2026-01-10 14:03:32.377439 | orchestrator | 2026-01-10 14:03:32 | INFO  | Task 96784dab-442a-4836-a2bf-3794cf24ecf0 (squid) was prepared for execution. 2026-01-10 14:03:32.377678 | orchestrator | 2026-01-10 14:03:32 | INFO  | It takes a moment until task 96784dab-442a-4836-a2bf-3794cf24ecf0 (squid) has been started and output is visible here. 
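The `osism.commons.known_hosts` tasks above scan each inventory host with ssh-keyscan and write the entries idempotently (repeat runs report `ok` instead of `changed`). A minimal sketch of that scan-and-write cycle, using one entry copied from the log instead of a live `ssh-keyscan` call, and with the target file and mode as assumptions rather than the role's actual defaults:

```shell
#!/bin/sh
# Sketch of the known_hosts role's write cycle. The entry below is taken
# from the log output; in the real role it comes from e.g.
#   ssh-keyscan -t ed25519 "$host"
set -e

KNOWN_HOSTS=$(mktemp)   # stand-in for the operator's known_hosts file
entry="192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGuHpAZ05BfMnU+RnOjEyuzsv14cgvYWsuKFZMYyQIwu"

# Append only if the exact line is not present yet (idempotent write).
grep -qxF "$entry" "$KNOWN_HOSTS" || printf '%s\n' "$entry" >> "$KNOWN_HOSTS"
# A second pass is a no-op, which is what lets a re-run report "ok".
grep -qxF "$entry" "$KNOWN_HOSTS" || printf '%s\n' "$entry" >> "$KNOWN_HOSTS"

# Mirror the trailing "Set file permissions" task (mode is an assumption).
chmod 0644 "$KNOWN_HOSTS"

wc -l < "$KNOWN_HOSTS"   # prints 1: the duplicate append was skipped
```

The exact-match `grep -qxF` guard is what makes the write safe to repeat across the many `Write scanned known_hosts entries` tasks above.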
2026-01-10 14:05:25.211820 | orchestrator | 2026-01-10 14:05:25.211941 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-10 14:05:25.211957 | orchestrator | 2026-01-10 14:05:25.211970 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-10 14:05:25.212012 | orchestrator | Saturday 10 January 2026 14:03:36 +0000 (0:00:00.140) 0:00:00.140 ****** 2026-01-10 14:05:25.212043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 14:05:25.212056 | orchestrator | 2026-01-10 14:05:25.212067 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-10 14:05:25.212078 | orchestrator | Saturday 10 January 2026 14:03:36 +0000 (0:00:00.075) 0:00:00.216 ****** 2026-01-10 14:05:25.212090 | orchestrator | ok: [testbed-manager] 2026-01-10 14:05:25.212102 | orchestrator | 2026-01-10 14:05:25.212113 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-10 14:05:25.212124 | orchestrator | Saturday 10 January 2026 14:03:37 +0000 (0:00:01.122) 0:00:01.339 ****** 2026-01-10 14:05:25.212136 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-10 14:05:25.212146 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-10 14:05:25.212158 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-10 14:05:25.212169 | orchestrator | 2026-01-10 14:05:25.212180 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-10 14:05:25.212191 | orchestrator | Saturday 10 January 2026 14:03:38 +0000 (0:00:01.009) 0:00:02.348 ****** 2026-01-10 14:05:25.212202 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-10 14:05:25.212213 | 
orchestrator | 2026-01-10 14:05:25.212224 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-10 14:05:25.212235 | orchestrator | Saturday 10 January 2026 14:03:39 +0000 (0:00:01.028) 0:00:03.376 ****** 2026-01-10 14:05:25.212246 | orchestrator | ok: [testbed-manager] 2026-01-10 14:05:25.212257 | orchestrator | 2026-01-10 14:05:25.212268 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-10 14:05:25.212279 | orchestrator | Saturday 10 January 2026 14:03:39 +0000 (0:00:00.349) 0:00:03.726 ****** 2026-01-10 14:05:25.212290 | orchestrator | changed: [testbed-manager] 2026-01-10 14:05:25.212301 | orchestrator | 2026-01-10 14:05:25.212312 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-10 14:05:25.212328 | orchestrator | Saturday 10 January 2026 14:03:40 +0000 (0:00:00.887) 0:00:04.614 ****** 2026-01-10 14:05:25.212339 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-10 14:05:25.212354 | orchestrator | ok: [testbed-manager] 2026-01-10 14:05:25.212367 | orchestrator | 2026-01-10 14:05:25.212379 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-10 14:05:25.212392 | orchestrator | Saturday 10 January 2026 14:04:12 +0000 (0:00:31.470) 0:00:36.084 ****** 2026-01-10 14:05:25.212405 | orchestrator | changed: [testbed-manager] 2026-01-10 14:05:25.212418 | orchestrator | 2026-01-10 14:05:25.212431 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-10 14:05:25.212443 | orchestrator | Saturday 10 January 2026 14:04:24 +0000 (0:00:11.941) 0:00:48.026 ****** 2026-01-10 14:05:25.212456 | orchestrator | Pausing for 60 seconds 2026-01-10 14:05:25.212469 | orchestrator | changed: [testbed-manager] 2026-01-10 14:05:25.212482 | orchestrator | 2026-01-10 14:05:25.212495 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-10 14:05:25.212544 | orchestrator | Saturday 10 January 2026 14:05:24 +0000 (0:01:00.076) 0:01:48.103 ****** 2026-01-10 14:05:25.212564 | orchestrator | ok: [testbed-manager] 2026-01-10 14:05:25.212585 | orchestrator | 2026-01-10 14:05:25.212604 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-10 14:05:25.212621 | orchestrator | Saturday 10 January 2026 14:05:24 +0000 (0:00:00.070) 0:01:48.173 ****** 2026-01-10 14:05:25.212634 | orchestrator | changed: [testbed-manager] 2026-01-10 14:05:25.212646 | orchestrator | 2026-01-10 14:05:25.212659 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:05:25.212672 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:05:25.212694 | orchestrator | 2026-01-10 14:05:25.212707 | orchestrator | 2026-01-10 14:05:25.212719 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-10 14:05:25.212731 | orchestrator | Saturday 10 January 2026 14:05:24 +0000 (0:00:00.578) 0:01:48.752 ****** 2026-01-10 14:05:25.212749 | orchestrator | =============================================================================== 2026-01-10 14:05:25.212767 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-01-10 14:05:25.212786 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.47s 2026-01-10 14:05:25.212804 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.94s 2026-01-10 14:05:25.212820 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.12s 2026-01-10 14:05:25.212847 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.03s 2026-01-10 14:05:25.212867 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.01s 2026-01-10 14:05:25.212885 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.89s 2026-01-10 14:05:25.212903 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.58s 2026-01-10 14:05:25.212915 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-01-10 14:05:25.212926 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-01-10 14:05:25.212936 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-01-10 14:05:25.469936 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 14:05:25.470109 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-10 14:05:25.477342 | orchestrator | + set -e 2026-01-10 14:05:25.477387 | orchestrator | + NAMESPACE=kolla 2026-01-10 
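The `set-kolla-namespace.sh` invocation traced above boils down to a single `sed` substitution on the inventory's `kolla.yml`. A reproduction against a scratch file (the substitution and `#` delimiters match the trace; the file's prior content is illustrative, not taken from the repo):

```shell
#!/bin/sh
# Reproduce the namespace rewrite from set-kolla-namespace.sh on a scratch
# file instead of /opt/configuration/inventory/group_vars/all/kolla.yml.
set -e
NAMESPACE=kolla
KOLLA_YML=$(mktemp)
printf 'docker_namespace: osism\n' > "$KOLLA_YML"   # illustrative old value

# Same substitution as the traced script; '#' delimiters avoid having to
# escape slashes that a registry path in the replacement could contain.
sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" "$KOLLA_YML"

cat "$KOLLA_YML"   # docker_namespace: kolla
```

Because the pattern matches the whole value (`.*`), the rewrite works no matter which namespace was previously configured.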
14:05:25.477402 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-10 14:05:25.482857 | orchestrator | ++ semver latest 9.0.0 2026-01-10 14:05:25.539475 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-10 14:05:25.539591 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 14:05:25.540233 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-10 14:05:37.600945 | orchestrator | 2026-01-10 14:05:37 | INFO  | Task 2ca8fce1-4d27-414a-aa88-2b9edf570822 (operator) was prepared for execution. 2026-01-10 14:05:37.601088 | orchestrator | 2026-01-10 14:05:37 | INFO  | It takes a moment until task 2ca8fce1-4d27-414a-aa88-2b9edf570822 (operator) has been started and output is visible here. 2026-01-10 14:05:52.539637 | orchestrator | 2026-01-10 14:05:52.539763 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-10 14:05:52.539779 | orchestrator | 2026-01-10 14:05:52.539791 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 14:05:52.539803 | orchestrator | Saturday 10 January 2026 14:05:41 +0000 (0:00:00.103) 0:00:00.103 ****** 2026-01-10 14:05:52.539814 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:05:52.539826 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:05:52.539837 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:05:52.539848 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:05:52.539859 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:05:52.539870 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:05:52.539881 | orchestrator | 2026-01-10 14:05:52.539892 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-10 14:05:52.539907 | orchestrator | Saturday 10 January 2026 14:05:44 +0000 (0:00:03.117) 0:00:03.221 ****** 2026-01-10 14:05:52.539919 | orchestrator | ok: [testbed-node-0] 
2026-01-10 14:05:52.539930 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:05:52.539940 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:05:52.539951 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:05:52.539962 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:05:52.539973 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:05:52.539984 | orchestrator | 2026-01-10 14:05:52.540015 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-10 14:05:52.540026 | orchestrator | 2026-01-10 14:05:52.540037 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-10 14:05:52.540048 | orchestrator | Saturday 10 January 2026 14:05:45 +0000 (0:00:00.767) 0:00:03.988 ****** 2026-01-10 14:05:52.540059 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:05:52.540070 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:05:52.540081 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:05:52.540091 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:05:52.540102 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:05:52.540112 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:05:52.540123 | orchestrator | 2026-01-10 14:05:52.540134 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-10 14:05:52.540147 | orchestrator | Saturday 10 January 2026 14:05:45 +0000 (0:00:00.137) 0:00:04.126 ****** 2026-01-10 14:05:52.540161 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:05:52.540173 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:05:52.540186 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:05:52.540199 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:05:52.540211 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:05:52.540223 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:05:52.540236 | orchestrator | 2026-01-10 14:05:52.540248 | orchestrator | TASK [osism.commons.operator : Create operator group] 
************************** 2026-01-10 14:05:52.540261 | orchestrator | Saturday 10 January 2026 14:05:45 +0000 (0:00:00.131) 0:00:04.258 ****** 2026-01-10 14:05:52.540273 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:05:52.540286 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:05:52.540302 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:05:52.540319 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:05:52.540332 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:05:52.540344 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:05:52.540355 | orchestrator | 2026-01-10 14:05:52.540366 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-10 14:05:52.540377 | orchestrator | Saturday 10 January 2026 14:05:46 +0000 (0:00:00.635) 0:00:04.893 ****** 2026-01-10 14:05:52.540388 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:05:52.540399 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:05:52.540410 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:05:52.540420 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:05:52.540431 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:05:52.540442 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:05:52.540453 | orchestrator | 2026-01-10 14:05:52.540464 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-10 14:05:52.540475 | orchestrator | Saturday 10 January 2026 14:05:46 +0000 (0:00:00.766) 0:00:05.660 ****** 2026-01-10 14:05:52.540486 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-01-10 14:05:52.540497 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-01-10 14:05:52.540533 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-01-10 14:05:52.540545 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-01-10 14:05:52.540556 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-01-10 
14:05:52.540567 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-10 14:05:52.540578 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-10 14:05:52.540588 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-10 14:05:52.540599 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-10 14:05:52.540610 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-10 14:05:52.540621 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-10 14:05:52.540643 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-10 14:05:52.540654 | orchestrator |
2026-01-10 14:05:52.540665 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-10 14:05:52.540676 | orchestrator | Saturday 10 January 2026 14:05:48 +0000 (0:00:01.171) 0:00:06.831 ******
2026-01-10 14:05:52.540695 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:05:52.540705 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:05:52.540716 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:05:52.540727 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:05:52.540737 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:05:52.540748 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:05:52.540759 | orchestrator |
2026-01-10 14:05:52.540770 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-10 14:05:52.540782 | orchestrator | Saturday 10 January 2026 14:05:49 +0000 (0:00:01.211) 0:00:08.042 ******
2026-01-10 14:05:52.540792 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-10 14:05:52.540803 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-10 14:05:52.540814 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-10 14:05:52.540825 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:05:52.540852 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:05:52.540864 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:05:52.540875 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:05:52.540886 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:05:52.540896 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:05:52.540907 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-10 14:05:52.540918 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-10 14:05:52.540929 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-10 14:05:52.540939 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-10 14:05:52.540950 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-10 14:05:52.540961 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-10 14:05:52.540974 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:05:52.540992 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:05:52.541003 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:05:52.541013 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:05:52.541024 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:05:52.541039 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:05:52.541050 | orchestrator |
2026-01-10 14:05:52.541061 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-10 14:05:52.541072 | orchestrator | Saturday 10 January 2026 14:05:50 +0000 (0:00:01.199) 0:00:09.242 ******
2026-01-10 14:05:52.541083 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:05:52.541094 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:05:52.541104 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:05:52.541114 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:05:52.541125 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:05:52.541136 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:05:52.541146 | orchestrator |
2026-01-10 14:05:52.541157 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-10 14:05:52.541168 | orchestrator | Saturday 10 January 2026 14:05:50 +0000 (0:00:00.160) 0:00:09.403 ******
2026-01-10 14:05:52.541178 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:05:52.541189 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:05:52.541200 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:05:52.541210 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:05:52.541221 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:05:52.541232 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:05:52.541250 | orchestrator |
2026-01-10 14:05:52.541261 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-10 14:05:52.541272 | orchestrator | Saturday 10 January 2026 14:05:50 +0000 (0:00:00.554) 0:00:09.569 ******
2026-01-10 14:05:52.541282 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:05:52.541293 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:05:52.541304 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:05:52.541314 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:05:52.541325 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:05:52.541335 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:05:52.541346 | orchestrator |
2026-01-10 14:05:52.541356 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-10 14:05:52.541367 | orchestrator | Saturday 10 January 2026 14:05:51 +0000 (0:00:00.554) 0:00:10.124 ******
2026-01-10 14:05:52.541377 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:05:52.541388 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:05:52.541399 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:05:52.541409 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:05:52.541420 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:05:52.541430 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:05:52.541440 | orchestrator |
2026-01-10 14:05:52.541451 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-10 14:05:52.541462 | orchestrator | Saturday 10 January 2026 14:05:51 +0000 (0:00:00.195) 0:00:10.319 ******
2026-01-10 14:05:52.541473 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-10 14:05:52.541483 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-10 14:05:52.541494 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:05:52.541505 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:05:52.541569 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-10 14:05:52.541581 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:05:52.541592 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:05:52.541603 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:05:52.541613 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-10 14:05:52.541624 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:05:52.541634 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-10 14:05:52.541645 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:05:52.541655 | orchestrator |
2026-01-10 14:05:52.541666 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-10 14:05:52.541677 | orchestrator | Saturday 10 January 2026 14:05:52 +0000 (0:00:00.695) 0:00:11.015 ******
2026-01-10 14:05:52.541688 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:05:52.541699 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:05:52.541709 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:05:52.541720 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:05:52.541731 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:05:52.541741 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:05:52.541752 | orchestrator |
2026-01-10 14:05:52.541763 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-10 14:05:52.541774 | orchestrator | Saturday 10 January 2026 14:05:52 +0000 (0:00:00.171) 0:00:11.187 ******
2026-01-10 14:05:52.541784 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:05:52.541795 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:05:52.541806 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:05:52.541817 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:05:52.541835 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:05:54.012250 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:05:54.012380 | orchestrator |
2026-01-10 14:05:54.012398 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-10 14:05:54.012412 | orchestrator | Saturday 10 January 2026 14:05:52 +0000 (0:00:00.166) 0:00:11.353 ******
2026-01-10 14:05:54.012423 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:05:54.012434 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:05:54.012480 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:05:54.012491 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:05:54.012502 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:05:54.012562 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:05:54.012573 | orchestrator |
2026-01-10 14:05:54.012585 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-10 14:05:54.012596 | orchestrator | Saturday 10 January 2026 14:05:52 +0000 (0:00:00.175) 0:00:11.528 ******
2026-01-10 14:05:54.012607 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:05:54.012618 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:05:54.012629 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:05:54.012640 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:05:54.012651 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:05:54.012661 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:05:54.012672 | orchestrator |
2026-01-10 14:05:54.012683 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-10 14:05:54.012694 | orchestrator | Saturday 10 January 2026 14:05:53 +0000 (0:00:00.679) 0:00:12.208 ******
2026-01-10 14:05:54.012705 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:05:54.012715 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:05:54.012726 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:05:54.012737 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:05:54.012748 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:05:54.012760 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:05:54.012773 | orchestrator |
2026-01-10 14:05:54.012786 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:05:54.012799 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:05:54.012814 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:05:54.012827 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:05:54.012839 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:05:54.012851 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:05:54.012864 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:05:54.012876 | orchestrator |
2026-01-10 14:05:54.012888 | orchestrator |
2026-01-10 14:05:54.012901 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:05:54.012914 | orchestrator | Saturday 10 January 2026 14:05:53 +0000 (0:00:00.268) 0:00:12.476 ******
2026-01-10 14:05:54.012928 | orchestrator | ===============================================================================
2026-01-10 14:05:54.012941 | orchestrator | Gathering Facts --------------------------------------------------------- 3.12s
2026-01-10 14:05:54.012952 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.21s
2026-01-10 14:05:54.012963 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.20s
2026-01-10 14:05:54.012975 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s
2026-01-10 14:05:54.012986 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s
2026-01-10 14:05:54.012996 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s
2026-01-10 14:05:54.013007 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2026-01-10 14:05:54.013018 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.68s
2026-01-10 14:05:54.013037 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s
2026-01-10 14:05:54.013071 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.55s
2026-01-10 14:05:54.013083 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s
2026-01-10 14:05:54.013093 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2026-01-10 14:05:54.013104 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2026-01-10 14:05:54.013115 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2026-01-10 14:05:54.013126 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-01-10 14:05:54.013137 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-01-10 14:05:54.013148 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2026-01-10 14:05:54.013159 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s
2026-01-10 14:05:54.013170 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s
2026-01-10 14:05:54.292667 | orchestrator | + osism apply --environment custom facts
2026-01-10 14:05:56.243657 | orchestrator | 2026-01-10 14:05:56 | INFO  | Trying to run play facts in environment custom
2026-01-10 14:06:06.322447 | orchestrator | 2026-01-10 14:06:06 | INFO  | Task c63b655a-993f-4806-b7a2-a30931debd6d (facts) was prepared for execution.
2026-01-10 14:06:06.322637 | orchestrator | 2026-01-10 14:06:06 | INFO  | It takes a moment until task c63b655a-993f-4806-b7a2-a30931debd6d (facts) has been started and output is visible here.
2026-01-10 14:06:48.791031 | orchestrator |
2026-01-10 14:06:48.791175 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-10 14:06:48.791192 | orchestrator |
2026-01-10 14:06:48.791205 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-10 14:06:48.791217 | orchestrator | Saturday 10 January 2026 14:06:10 +0000 (0:00:00.080) 0:00:00.080 ******
2026-01-10 14:06:48.791228 | orchestrator | ok: [testbed-manager]
2026-01-10 14:06:48.791241 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:06:48.791253 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:06:48.791264 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:06:48.791275 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:06:48.791286 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:06:48.791297 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:06:48.791308 | orchestrator |
2026-01-10 14:06:48.791319 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-10 14:06:48.791351 | orchestrator | Saturday 10 January 2026 14:06:11 +0000 (0:00:01.376) 0:00:01.457 ******
2026-01-10 14:06:48.791362 | orchestrator | ok: [testbed-manager]
2026-01-10 14:06:48.791373 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:06:48.791384 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:06:48.791395 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:06:48.791406 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:06:48.791416 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:06:48.791427 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:06:48.791439 | orchestrator |
2026-01-10 14:06:48.791450 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-10 14:06:48.791461 | orchestrator |
2026-01-10 14:06:48.791472 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-10 14:06:48.791483 | orchestrator | Saturday 10 January 2026 14:06:12 +0000 (0:00:01.252) 0:00:02.709 ******
2026-01-10 14:06:48.791494 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:06:48.791505 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:06:48.791516 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:06:48.791554 | orchestrator |
2026-01-10 14:06:48.791567 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-10 14:06:48.791609 | orchestrator | Saturday 10 January 2026 14:06:13 +0000 (0:00:00.119) 0:00:02.829 ******
2026-01-10 14:06:48.791623 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:06:48.791636 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:06:48.791648 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:06:48.791661 | orchestrator |
2026-01-10 14:06:48.791673 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-10 14:06:48.791686 | orchestrator | Saturday 10 January 2026 14:06:13 +0000 (0:00:00.205) 0:00:03.039 ******
2026-01-10 14:06:48.791699 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:06:48.791712 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:06:48.791724 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:06:48.791737 | orchestrator |
2026-01-10 14:06:48.791750 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-10 14:06:48.791763 | orchestrator | Saturday 10 January 2026 14:06:13 +0000 (0:00:00.155) 0:00:03.244 ******
2026-01-10 14:06:48.791778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:06:48.791793 | orchestrator |
2026-01-10 14:06:48.791806 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-10 14:06:48.791818 | orchestrator | Saturday 10 January 2026 14:06:13 +0000 (0:00:00.155) 0:00:03.400 ******
2026-01-10 14:06:48.791830 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:06:48.791843 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:06:48.791855 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:06:48.791868 | orchestrator |
2026-01-10 14:06:48.791880 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-10 14:06:48.791892 | orchestrator | Saturday 10 January 2026 14:06:14 +0000 (0:00:00.469) 0:00:03.869 ******
2026-01-10 14:06:48.791903 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:06:48.791913 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:06:48.791924 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:06:48.791935 | orchestrator |
2026-01-10 14:06:48.791946 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-10 14:06:48.791957 | orchestrator | Saturday 10 January 2026 14:06:14 +0000 (0:00:00.130) 0:00:03.999 ******
2026-01-10 14:06:48.791968 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:06:48.791978 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:06:48.791989 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:06:48.792000 | orchestrator |
2026-01-10 14:06:48.792011 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-10 14:06:48.792022 | orchestrator | Saturday 10 January 2026 14:06:15 +0000 (0:00:01.061) 0:00:05.061 ******
2026-01-10 14:06:48.792032 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:06:48.792043 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:06:48.792054 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:06:48.792065 | orchestrator |
2026-01-10 14:06:48.792076 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-10 14:06:48.792087 | orchestrator | Saturday 10 January 2026 14:06:15 +0000 (0:00:00.454) 0:00:05.515 ******
2026-01-10 14:06:48.792097 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:06:48.792109 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:06:48.792119 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:06:48.792130 | orchestrator |
2026-01-10 14:06:48.792141 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-10 14:06:48.792152 | orchestrator | Saturday 10 January 2026 14:06:16 +0000 (0:00:01.095) 0:00:06.611 ******
2026-01-10 14:06:48.792163 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:06:48.792173 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:06:48.792184 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:06:48.792195 | orchestrator |
2026-01-10 14:06:48.792205 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-10 14:06:48.792216 | orchestrator | Saturday 10 January 2026 14:06:32 +0000 (0:00:15.860) 0:00:22.471 ******
2026-01-10 14:06:48.792237 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:06:48.792248 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:06:48.792259 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:06:48.792269 | orchestrator |
2026-01-10 14:06:48.792280 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-10 14:06:48.792310 | orchestrator | Saturday 10 January 2026 14:06:32 +0000 (0:00:00.092) 0:00:22.564 ******
2026-01-10 14:06:48.792322 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:06:48.792333 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:06:48.792344 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:06:48.792354 | orchestrator |
2026-01-10 14:06:48.792365 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-10 14:06:48.792376 | orchestrator | Saturday 10 January 2026 14:06:40 +0000 (0:00:07.385) 0:00:29.949 ******
2026-01-10 14:06:48.792387 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:06:48.792398 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:06:48.792409 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:06:48.792419 | orchestrator |
2026-01-10 14:06:48.792430 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-10 14:06:48.792441 | orchestrator | Saturday 10 January 2026 14:06:40 +0000 (0:00:00.432) 0:00:30.382 ******
2026-01-10 14:06:48.792452 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-10 14:06:48.792464 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-10 14:06:48.792474 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-10 14:06:48.792486 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-10 14:06:48.792497 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-10 14:06:48.792508 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-10 14:06:48.792519 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-10 14:06:48.792549 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-10 14:06:48.792560 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-10 14:06:48.792570 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-10 14:06:48.792581 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-10 14:06:48.792592 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-10 14:06:48.792603 | orchestrator |
2026-01-10 14:06:48.792614 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-10 14:06:48.792625 | orchestrator | Saturday 10 January 2026 14:06:43 +0000 (0:00:03.349) 0:00:33.732 ******
2026-01-10 14:06:48.792636 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:06:48.792647 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:06:48.792657 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:06:48.792668 | orchestrator |
2026-01-10 14:06:48.792679 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 14:06:48.792690 | orchestrator |
2026-01-10 14:06:48.792701 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:06:48.792712 | orchestrator | Saturday 10 January 2026 14:06:45 +0000 (0:00:01.260) 0:00:34.993 ******
2026-01-10 14:06:48.792723 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:06:48.792733 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:06:48.792744 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:06:48.792755 | orchestrator | ok: [testbed-manager]
2026-01-10 14:06:48.792766 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:06:48.792776 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:06:48.792787 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:06:48.792798 | orchestrator |
2026-01-10 14:06:48.792809 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:06:48.792821 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:06:48.792841 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:06:48.792853 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:06:48.792864 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:06:48.792875 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:06:48.792887 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:06:48.792897 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:06:48.792908 | orchestrator |
2026-01-10 14:06:48.792919 | orchestrator |
2026-01-10 14:06:48.792930 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:06:48.792941 | orchestrator | Saturday 10 January 2026 14:06:48 +0000 (0:00:03.579) 0:00:38.572 ******
2026-01-10 14:06:48.792952 | orchestrator | ===============================================================================
2026-01-10 14:06:48.792963 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.86s
2026-01-10 14:06:48.792974 | orchestrator | Install required packages (Debian) -------------------------------------- 7.39s
2026-01-10 14:06:48.792985 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.58s
2026-01-10 14:06:48.792996 | orchestrator | Copy fact files --------------------------------------------------------- 3.35s
2026-01-10 14:06:48.793007 | orchestrator | Create custom facts directory ------------------------------------------- 1.38s
2026-01-10 14:06:48.793017 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.26s
2026-01-10 14:06:48.793083 | orchestrator | Copy fact file ---------------------------------------------------------- 1.25s
2026-01-10 14:06:49.005690 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2026-01-10 14:06:49.005812 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2026-01-10 14:06:49.005826 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2026-01-10 14:06:49.005837 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2026-01-10 14:06:49.005849 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2026-01-10 14:06:49.005860 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-01-10 14:06:49.005872 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2026-01-10 14:06:49.005908 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-01-10 14:06:49.005920 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-01-10 14:06:49.005932 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-01-10 14:06:49.005942 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-01-10 14:06:49.269440 | orchestrator | + osism apply bootstrap
2026-01-10 14:07:01.391409 | orchestrator | 2026-01-10 14:07:01 | INFO  | Task d8d73819-aea6-4024-a586-0c8ff534db64 (bootstrap) was prepared for execution.
2026-01-10 14:07:01.391612 | orchestrator | 2026-01-10 14:07:01 | INFO  | It takes a moment until task d8d73819-aea6-4024-a586-0c8ff534db64 (bootstrap) has been started and output is visible here.
2026-01-10 14:07:17.168623 | orchestrator |
2026-01-10 14:07:17.168775 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-10 14:07:17.168821 | orchestrator |
2026-01-10 14:07:17.168835 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-10 14:07:17.168847 | orchestrator | Saturday 10 January 2026 14:07:05 +0000 (0:00:00.150) 0:00:00.150 ******
2026-01-10 14:07:17.168858 | orchestrator | ok: [testbed-manager]
2026-01-10 14:07:17.168871 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:07:17.168882 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:07:17.168893 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:07:17.168903 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:07:17.168914 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:07:17.168925 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:07:17.168936 | orchestrator |
2026-01-10 14:07:17.168947 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 14:07:17.168957 | orchestrator |
2026-01-10 14:07:17.168969 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:07:17.168980 | orchestrator | Saturday 10 January 2026 14:07:05 +0000 (0:00:00.234) 0:00:00.385 ******
2026-01-10 14:07:17.168991 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:07:17.169001 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:07:17.169012 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:07:17.169023 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:07:17.169033 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:07:17.169045 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:07:17.169056 | orchestrator | ok: [testbed-manager]
2026-01-10 14:07:17.169066 | orchestrator |
2026-01-10 14:07:17.169077 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-10 14:07:17.169089 | orchestrator |
2026-01-10 14:07:17.169102 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:07:17.169114 | orchestrator | Saturday 10 January 2026 14:07:09 +0000 (0:00:03.728) 0:00:04.114 ******
2026-01-10 14:07:17.169128 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-10 14:07:17.169140 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-10 14:07:17.169153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-10 14:07:17.169166 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-10 14:07:17.169179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:07:17.169192 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-10 14:07:17.169204 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-10 14:07:17.169217 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-10 14:07:17.169230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:07:17.169242 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-10 14:07:17.169254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-10 14:07:17.169267 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-10 14:07:17.169280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:07:17.169292 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-10 14:07:17.169305 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-10 14:07:17.169317 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-10 14:07:17.169330 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-10 14:07:17.169343 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:07:17.169355 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-10 14:07:17.169367 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-10 14:07:17.169379 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-10 14:07:17.169392 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-10 14:07:17.169404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:07:17.169417 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-10 14:07:17.169437 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-10 14:07:17.169450 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-10 14:07:17.169461 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-10 14:07:17.169472 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-10 14:07:17.169483 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-10 14:07:17.169493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:07:17.169504 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-10 14:07:17.169515 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-10 14:07:17.169545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-10 14:07:17.169556 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-10 14:07:17.169567 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:07:17.169597 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-10 14:07:17.169608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-10 14:07:17.169619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:07:17.169630 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:07:17.169641 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-10 14:07:17.169652 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:07:17.169663 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-10 14:07:17.169674 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:07:17.169685 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:07:17.169696 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:07:17.169707 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-10 14:07:17.169738 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:07:17.169750 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:07:17.169760 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:07:17.169771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:07:17.169782 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-10 14:07:17.169793 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:07:17.169803 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:07:17.169814 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:07:17.169825 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:07:17.169835 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:07:17.169846 | orchestrator |
2026-01-10 14:07:17.169857 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-10 14:07:17.169868 | orchestrator |
2026-01-10 14:07:17.169879 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-10 14:07:17.169890 | orchestrator | Saturday 10 January 2026 14:07:09 +0000 (0:00:00.395) 0:00:04.509 ******
2026-01-10 14:07:17.169900 | orchestrator | ok: [testbed-manager]
2026-01-10 14:07:17.169911 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:07:17.169922 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:07:17.169933 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:07:17.169944 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:07:17.169955 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:07:17.169965 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:07:17.169976 | orchestrator |
2026-01-10 14:07:17.169987 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-10 14:07:17.169998 | orchestrator | Saturday 10 January 2026 14:07:11 +0000 (0:00:01.211) 0:00:05.721 ******
2026-01-10 14:07:17.170009 | orchestrator | ok: [testbed-manager]
2026-01-10 14:07:17.170085 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:07:17.170096 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:07:17.170107 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:07:17.170130 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:07:17.170141 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:07:17.170152 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:07:17.170163 | orchestrator |
2026-01-10 14:07:17.170174 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-10 14:07:17.170185 | orchestrator | Saturday 10 January 2026 14:07:12 +0000 (0:00:01.174) 0:00:06.895 ******
2026-01-10 14:07:17.170197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:07:17.170211 | orchestrator |
2026-01-10 14:07:17.170222 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-10 14:07:17.170233 | orchestrator | Saturday 10
January 2026 14:07:12 +0000 (0:00:00.289) 0:00:07.185 ****** 2026-01-10 14:07:17.170244 | orchestrator | changed: [testbed-manager] 2026-01-10 14:07:17.170254 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:07:17.170265 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:07:17.170276 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:07:17.170287 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:07:17.170297 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:07:17.170308 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:07:17.170318 | orchestrator | 2026-01-10 14:07:17.170329 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-10 14:07:17.170340 | orchestrator | Saturday 10 January 2026 14:07:14 +0000 (0:00:02.016) 0:00:09.201 ****** 2026-01-10 14:07:17.170351 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:07:17.170363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:07:17.170376 | orchestrator | 2026-01-10 14:07:17.170387 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-10 14:07:17.170398 | orchestrator | Saturday 10 January 2026 14:07:14 +0000 (0:00:00.271) 0:00:09.473 ****** 2026-01-10 14:07:17.170409 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:07:17.170420 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:07:17.170430 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:07:17.170441 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:07:17.170451 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:07:17.170462 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:07:17.170473 | orchestrator | 2026-01-10 14:07:17.170484 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-01-10 14:07:17.170494 | orchestrator | Saturday 10 January 2026 14:07:15 +0000 (0:00:01.020) 0:00:10.494 ****** 2026-01-10 14:07:17.170505 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:07:17.170516 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:07:17.170546 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:07:17.170557 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:07:17.170568 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:07:17.170579 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:07:17.170590 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:07:17.170600 | orchestrator | 2026-01-10 14:07:17.170612 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-10 14:07:17.170623 | orchestrator | Saturday 10 January 2026 14:07:16 +0000 (0:00:00.630) 0:00:11.124 ****** 2026-01-10 14:07:17.170634 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:07:17.170644 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:07:17.170655 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:07:17.170666 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:07:17.170677 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:07:17.170688 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:07:17.170699 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:17.170717 | orchestrator | 2026-01-10 14:07:17.170728 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-10 14:07:17.170740 | orchestrator | Saturday 10 January 2026 14:07:17 +0000 (0:00:00.435) 0:00:11.560 ****** 2026-01-10 14:07:17.170751 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:07:17.170762 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:07:17.170779 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:07:28.660450 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:07:28.660590 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:07:28.660606 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:07:28.660617 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:07:28.660627 | orchestrator | 2026-01-10 14:07:28.660639 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-10 14:07:28.660651 | orchestrator | Saturday 10 January 2026 14:07:17 +0000 (0:00:00.222) 0:00:11.782 ****** 2026-01-10 14:07:28.660663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:07:28.660686 | orchestrator | 2026-01-10 14:07:28.660697 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-10 14:07:28.660708 | orchestrator | Saturday 10 January 2026 14:07:17 +0000 (0:00:00.269) 0:00:12.052 ****** 2026-01-10 14:07:28.660718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:07:28.660728 | orchestrator | 2026-01-10 14:07:28.660738 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-10 14:07:28.660757 | orchestrator | Saturday 10 January 2026 14:07:17 +0000 (0:00:00.292) 0:00:12.344 ****** 2026-01-10 14:07:28.660767 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.660778 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:07:28.660787 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:07:28.660797 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:07:28.660807 | orchestrator | ok: [testbed-node-3] 2026-01-10 
14:07:28.660816 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:07:28.660826 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:07:28.660835 | orchestrator | 2026-01-10 14:07:28.660845 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-10 14:07:28.660855 | orchestrator | Saturday 10 January 2026 14:07:19 +0000 (0:00:01.304) 0:00:13.648 ****** 2026-01-10 14:07:28.660864 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:07:28.660874 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:07:28.660884 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:07:28.660894 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:07:28.660904 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:07:28.660914 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:07:28.660924 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:07:28.660934 | orchestrator | 2026-01-10 14:07:28.660943 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-10 14:07:28.660953 | orchestrator | Saturday 10 January 2026 14:07:19 +0000 (0:00:00.199) 0:00:13.848 ****** 2026-01-10 14:07:28.660963 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.660975 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:07:28.660986 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:07:28.660997 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:07:28.661008 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:07:28.661020 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:07:28.661031 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:07:28.661042 | orchestrator | 2026-01-10 14:07:28.661053 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-10 14:07:28.661064 | orchestrator | Saturday 10 January 2026 14:07:19 +0000 (0:00:00.594) 0:00:14.442 ****** 2026-01-10 14:07:28.661092 | orchestrator | skipping: 
[testbed-manager] 2026-01-10 14:07:28.661105 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:07:28.661116 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:07:28.661127 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:07:28.661139 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:07:28.661149 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:07:28.661159 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:07:28.661168 | orchestrator | 2026-01-10 14:07:28.661178 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-10 14:07:28.661189 | orchestrator | Saturday 10 January 2026 14:07:20 +0000 (0:00:00.309) 0:00:14.752 ****** 2026-01-10 14:07:28.661198 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.661208 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:07:28.661217 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:07:28.661227 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:07:28.661237 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:07:28.661246 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:07:28.661256 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:07:28.661265 | orchestrator | 2026-01-10 14:07:28.661275 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-10 14:07:28.661285 | orchestrator | Saturday 10 January 2026 14:07:20 +0000 (0:00:00.530) 0:00:15.283 ****** 2026-01-10 14:07:28.661294 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.661304 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:07:28.661313 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:07:28.661323 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:07:28.661336 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:07:28.661346 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:07:28.661355 | orchestrator | changed: 
[testbed-node-2] 2026-01-10 14:07:28.661365 | orchestrator | 2026-01-10 14:07:28.661375 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-10 14:07:28.661384 | orchestrator | Saturday 10 January 2026 14:07:21 +0000 (0:00:01.055) 0:00:16.338 ****** 2026-01-10 14:07:28.661394 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.661403 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:07:28.661413 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:07:28.661422 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:07:28.661432 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:07:28.661441 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:07:28.661451 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:07:28.661461 | orchestrator | 2026-01-10 14:07:28.661471 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-10 14:07:28.661480 | orchestrator | Saturday 10 January 2026 14:07:22 +0000 (0:00:01.000) 0:00:17.338 ****** 2026-01-10 14:07:28.661506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:07:28.661516 | orchestrator | 2026-01-10 14:07:28.661546 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-10 14:07:28.661557 | orchestrator | Saturday 10 January 2026 14:07:23 +0000 (0:00:00.268) 0:00:17.607 ****** 2026-01-10 14:07:28.661567 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:07:28.661577 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:07:28.661586 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:07:28.661596 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:07:28.661605 | orchestrator | changed: [testbed-node-0] 2026-01-10 
14:07:28.661615 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:07:28.661625 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:07:28.661634 | orchestrator | 2026-01-10 14:07:28.661644 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-10 14:07:28.661654 | orchestrator | Saturday 10 January 2026 14:07:24 +0000 (0:00:01.223) 0:00:18.831 ****** 2026-01-10 14:07:28.661670 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.661680 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:07:28.661690 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:07:28.661699 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:07:28.661709 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:07:28.661719 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:07:28.661728 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:07:28.661738 | orchestrator | 2026-01-10 14:07:28.661748 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-10 14:07:28.661757 | orchestrator | Saturday 10 January 2026 14:07:24 +0000 (0:00:00.211) 0:00:19.042 ****** 2026-01-10 14:07:28.661767 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.661777 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:07:28.661786 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:07:28.661796 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:07:28.661805 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:07:28.661815 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:07:28.661825 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:07:28.661834 | orchestrator | 2026-01-10 14:07:28.661844 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-10 14:07:28.661854 | orchestrator | Saturday 10 January 2026 14:07:24 +0000 (0:00:00.226) 0:00:19.268 ****** 2026-01-10 14:07:28.661864 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.661873 | 
orchestrator | ok: [testbed-node-3] 2026-01-10 14:07:28.661883 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:07:28.661892 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:07:28.661902 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:07:28.661911 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:07:28.661921 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:07:28.661931 | orchestrator | 2026-01-10 14:07:28.661940 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-10 14:07:28.661950 | orchestrator | Saturday 10 January 2026 14:07:24 +0000 (0:00:00.224) 0:00:19.492 ****** 2026-01-10 14:07:28.661961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:07:28.661972 | orchestrator | 2026-01-10 14:07:28.661982 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-10 14:07:28.661992 | orchestrator | Saturday 10 January 2026 14:07:25 +0000 (0:00:00.269) 0:00:19.762 ****** 2026-01-10 14:07:28.662001 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.662011 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:07:28.662094 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:07:28.662105 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:07:28.662114 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:07:28.662124 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:07:28.662133 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:07:28.662143 | orchestrator | 2026-01-10 14:07:28.662153 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-10 14:07:28.662163 | orchestrator | Saturday 10 January 2026 14:07:25 +0000 (0:00:00.514) 0:00:20.276 ****** 2026-01-10 14:07:28.662172 | orchestrator | 
skipping: [testbed-manager] 2026-01-10 14:07:28.662182 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:07:28.662192 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:07:28.662201 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:07:28.662211 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:07:28.662220 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:07:28.662230 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:07:28.662240 | orchestrator | 2026-01-10 14:07:28.662249 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-10 14:07:28.662259 | orchestrator | Saturday 10 January 2026 14:07:25 +0000 (0:00:00.198) 0:00:20.475 ****** 2026-01-10 14:07:28.662268 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.662278 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:07:28.662296 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:07:28.662306 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:07:28.662316 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:07:28.662325 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:07:28.662339 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:07:28.662349 | orchestrator | 2026-01-10 14:07:28.662359 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-10 14:07:28.662369 | orchestrator | Saturday 10 January 2026 14:07:26 +0000 (0:00:01.009) 0:00:21.485 ****** 2026-01-10 14:07:28.662378 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.662388 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:07:28.662397 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:07:28.662407 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:07:28.662417 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:07:28.662426 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:07:28.662436 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:07:28.662445 | orchestrator | 
2026-01-10 14:07:28.662455 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-10 14:07:28.662465 | orchestrator | Saturday 10 January 2026 14:07:27 +0000 (0:00:00.556) 0:00:22.041 ****** 2026-01-10 14:07:28.662475 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:28.662484 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:07:28.662494 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:07:28.662503 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:07:28.662520 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:08:07.224309 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:08:07.224455 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:08:07.224474 | orchestrator | 2026-01-10 14:08:07.224488 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-10 14:08:07.224501 | orchestrator | Saturday 10 January 2026 14:07:28 +0000 (0:00:01.144) 0:00:23.185 ****** 2026-01-10 14:08:07.224512 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:08:07.224523 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:08:07.224580 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:08:07.224591 | orchestrator | changed: [testbed-manager] 2026-01-10 14:08:07.224602 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:08:07.224614 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:08:07.224625 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:08:07.224636 | orchestrator | 2026-01-10 14:08:07.224647 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-10 14:08:07.224659 | orchestrator | Saturday 10 January 2026 14:07:44 +0000 (0:00:15.731) 0:00:38.917 ****** 2026-01-10 14:08:07.224670 | orchestrator | ok: [testbed-manager] 2026-01-10 14:08:07.224681 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:08:07.224692 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:08:07.224703 | orchestrator 
| ok: [testbed-node-5] 2026-01-10 14:08:07.224714 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:08:07.224725 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:08:07.224736 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:08:07.224747 | orchestrator | 2026-01-10 14:08:07.224758 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-10 14:08:07.224769 | orchestrator | Saturday 10 January 2026 14:07:44 +0000 (0:00:00.197) 0:00:39.114 ****** 2026-01-10 14:08:07.224780 | orchestrator | ok: [testbed-manager] 2026-01-10 14:08:07.224791 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:08:07.224802 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:08:07.224813 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:08:07.224824 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:08:07.224834 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:08:07.224845 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:08:07.224856 | orchestrator | 2026-01-10 14:08:07.224867 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-10 14:08:07.224878 | orchestrator | Saturday 10 January 2026 14:07:44 +0000 (0:00:00.230) 0:00:39.344 ****** 2026-01-10 14:08:07.224889 | orchestrator | ok: [testbed-manager] 2026-01-10 14:08:07.224900 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:08:07.224936 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:08:07.224948 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:08:07.224958 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:08:07.224969 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:08:07.224980 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:08:07.224991 | orchestrator | 2026-01-10 14:08:07.225002 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-10 14:08:07.225013 | orchestrator | Saturday 10 January 2026 14:07:45 +0000 (0:00:00.219) 0:00:39.564 ****** 2026-01-10 
14:08:07.225025 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:08:07.225039 | orchestrator | 2026-01-10 14:08:07.225050 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-01-10 14:08:07.225061 | orchestrator | Saturday 10 January 2026 14:07:45 +0000 (0:00:00.296) 0:00:39.860 ****** 2026-01-10 14:08:07.225072 | orchestrator | ok: [testbed-manager] 2026-01-10 14:08:07.225083 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:08:07.225093 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:08:07.225104 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:08:07.225114 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:08:07.225125 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:08:07.225135 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:08:07.225146 | orchestrator | 2026-01-10 14:08:07.225157 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-01-10 14:08:07.225168 | orchestrator | Saturday 10 January 2026 14:07:46 +0000 (0:00:01.679) 0:00:41.539 ****** 2026-01-10 14:08:07.225179 | orchestrator | changed: [testbed-manager] 2026-01-10 14:08:07.225189 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:08:07.225200 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:08:07.225211 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:08:07.225222 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:08:07.225232 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:08:07.225243 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:08:07.225254 | orchestrator | 2026-01-10 14:08:07.225265 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-01-10 14:08:07.225275 | 
orchestrator | Saturday 10 January 2026 14:07:48 +0000 (0:00:01.006) 0:00:42.546 ****** 2026-01-10 14:08:07.225286 | orchestrator | ok: [testbed-manager] 2026-01-10 14:08:07.225297 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:08:07.225308 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:08:07.225319 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:08:07.225329 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:08:07.225340 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:08:07.225350 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:08:07.225361 | orchestrator | 2026-01-10 14:08:07.225372 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-01-10 14:08:07.225383 | orchestrator | Saturday 10 January 2026 14:07:48 +0000 (0:00:00.846) 0:00:43.393 ****** 2026-01-10 14:08:07.225394 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:08:07.225407 | orchestrator | 2026-01-10 14:08:07.225418 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-01-10 14:08:07.225429 | orchestrator | Saturday 10 January 2026 14:07:49 +0000 (0:00:00.345) 0:00:43.738 ****** 2026-01-10 14:08:07.225440 | orchestrator | changed: [testbed-manager] 2026-01-10 14:08:07.225451 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:08:07.225461 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:08:07.225472 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:08:07.225483 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:08:07.225494 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:08:07.225504 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:08:07.225524 | orchestrator | 2026-01-10 14:08:07.225572 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************
2026-01-10 14:08:07.225585 | orchestrator | Saturday 10 January 2026 14:07:50 +0000 (0:00:01.184) 0:00:44.923 ******
2026-01-10 14:08:07.225596 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:08:07.225606 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:08:07.225617 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:08:07.225628 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:08:07.225639 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:08:07.225649 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:08:07.225660 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:08:07.225671 | orchestrator |
2026-01-10 14:08:07.225682 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-10 14:08:07.225693 | orchestrator | Saturday 10 January 2026 14:07:50 +0000 (0:00:00.225) 0:00:45.148 ******
2026-01-10 14:08:07.225723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:08:07.225736 | orchestrator |
2026-01-10 14:08:07.225747 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-10 14:08:07.225758 | orchestrator | Saturday 10 January 2026 14:07:50 +0000 (0:00:00.283) 0:00:45.432 ******
2026-01-10 14:08:07.225769 | orchestrator | ok: [testbed-manager]
2026-01-10 14:08:07.225780 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:07.225790 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:08:07.225801 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:07.225812 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:08:07.225823 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:08:07.225833 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:07.225844 | orchestrator |
2026-01-10 14:08:07.225855 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-10 14:08:07.225866 | orchestrator | Saturday 10 January 2026 14:07:52 +0000 (0:00:01.592) 0:00:47.025 ******
2026-01-10 14:08:07.225877 | orchestrator | changed: [testbed-manager]
2026-01-10 14:08:07.225888 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:07.225898 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:07.225909 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:08:07.225920 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:07.225931 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:08:07.225941 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:08:07.225952 | orchestrator |
2026-01-10 14:08:07.225963 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-10 14:08:07.225974 | orchestrator | Saturday 10 January 2026 14:07:53 +0000 (0:00:01.180) 0:00:48.205 ******
2026-01-10 14:08:07.225985 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:07.225996 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:07.226006 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:08:07.226082 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:08:07.226097 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:08:07.226108 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:07.226119 | orchestrator | changed: [testbed-manager]
2026-01-10 14:08:07.226130 | orchestrator |
2026-01-10 14:08:07.226141 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-10 14:08:07.226152 | orchestrator | Saturday 10 January 2026 14:08:04 +0000 (0:00:10.955) 0:00:59.161 ******
2026-01-10 14:08:07.226163 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:07.226174 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:08:07.226185 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:08:07.226195 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:08:07.226206 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:07.226217 | orchestrator | ok: [testbed-manager]
2026-01-10 14:08:07.226228 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:07.226238 | orchestrator |
2026-01-10 14:08:07.226258 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-10 14:08:07.226269 | orchestrator | Saturday 10 January 2026 14:08:05 +0000 (0:00:01.029) 0:01:00.190 ******
2026-01-10 14:08:07.226280 | orchestrator | ok: [testbed-manager]
2026-01-10 14:08:07.226291 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:07.226302 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:07.226312 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:07.226323 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:08:07.226334 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:08:07.226345 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:08:07.226356 | orchestrator |
2026-01-10 14:08:07.226367 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-10 14:08:07.226378 | orchestrator | Saturday 10 January 2026 14:08:06 +0000 (0:00:00.872) 0:01:01.062 ******
2026-01-10 14:08:07.226389 | orchestrator | ok: [testbed-manager]
2026-01-10 14:08:07.226400 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:07.226410 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:07.226421 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:07.226432 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:08:07.226442 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:08:07.226453 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:08:07.226464 | orchestrator |
2026-01-10 14:08:07.226475 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-10 14:08:07.226491 | orchestrator | Saturday 10 January 2026 14:08:06 +0000 (0:00:00.207) 0:01:01.270 ******
2026-01-10 14:08:07.226502 | orchestrator | ok: [testbed-manager]
2026-01-10 14:08:07.226513 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:07.226524 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:07.226564 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:07.226576 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:08:07.226586 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:08:07.226597 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:08:07.226608 | orchestrator |
2026-01-10 14:08:07.226619 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-01-10 14:08:07.226630 | orchestrator | Saturday 10 January 2026 14:08:06 +0000 (0:00:00.217) 0:01:01.487 ******
2026-01-10 14:08:07.226642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:08:07.226653 | orchestrator |
2026-01-10 14:08:07.226673 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-01-10 14:10:22.870719 | orchestrator | Saturday 10 January 2026 14:08:07 +0000 (0:00:00.268) 0:01:01.756 ******
2026-01-10 14:10:22.870834 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.870852 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.870864 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.870876 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.870887 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.870897 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.870910 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.870921 | orchestrator |
2026-01-10 14:10:22.870934 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-01-10 14:10:22.870946 | orchestrator | Saturday 10 January 2026 14:08:08 +0000 (0:00:01.596) 0:01:03.352 ******
2026-01-10 14:10:22.870957 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:22.870968 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:22.870979 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:22.870990 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:22.871001 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:22.871012 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:22.871023 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:22.871034 | orchestrator |
2026-01-10 14:10:22.871045 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-10 14:10:22.871081 | orchestrator | Saturday 10 January 2026 14:08:09 +0000 (0:00:00.625) 0:01:03.978 ******
2026-01-10 14:10:22.871093 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.871104 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.871114 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.871125 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.871136 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.871147 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.871157 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.871168 | orchestrator |
2026-01-10 14:10:22.871179 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-10 14:10:22.871190 | orchestrator | Saturday 10 January 2026 14:08:09 +0000 (0:00:00.207) 0:01:04.185 ******
2026-01-10 14:10:22.871201 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.871212 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.871223 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.871234 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.871246 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.871258 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.871270 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.871282 | orchestrator |
2026-01-10 14:10:22.871294 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-10 14:10:22.871307 | orchestrator | Saturday 10 January 2026 14:08:10 +0000 (0:00:01.171) 0:01:05.356 ******
2026-01-10 14:10:22.871337 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:22.871360 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:22.871373 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:22.871385 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:22.871397 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:22.871409 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:22.871420 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:22.871432 | orchestrator |
2026-01-10 14:10:22.871445 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-10 14:10:22.871457 | orchestrator | Saturday 10 January 2026 14:08:12 +0000 (0:00:01.698) 0:01:07.055 ******
2026-01-10 14:10:22.871469 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.871481 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.871493 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.871505 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.871517 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.871529 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.871562 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.871574 | orchestrator |
2026-01-10 14:10:22.871587 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-10 14:10:22.871600 | orchestrator | Saturday 10 January 2026 14:08:14 +0000 (0:00:02.322) 0:01:09.377 ******
2026-01-10 14:10:22.871612 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.871622 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.871633 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.871643 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.871654 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.871664 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.871675 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.871686 | orchestrator |
2026-01-10 14:10:22.871696 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-10 14:10:22.871707 | orchestrator | Saturday 10 January 2026 14:08:51 +0000 (0:00:36.189) 0:01:45.567 ******
2026-01-10 14:10:22.871718 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:22.871729 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:22.871740 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:22.871750 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:22.871761 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:22.871771 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:22.871782 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:22.871793 | orchestrator |
2026-01-10 14:10:22.871804 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-10 14:10:22.871822 | orchestrator | Saturday 10 January 2026 14:10:09 +0000 (0:01:18.152) 0:03:03.719 ******
2026-01-10 14:10:22.871833 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.871860 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.871871 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.871882 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.871892 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.871903 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.871913 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.871924 | orchestrator |
2026-01-10 14:10:22.871935 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-10 14:10:22.871946 | orchestrator | Saturday 10 January 2026 14:10:10 +0000 (0:00:01.519) 0:03:05.238 ******
2026-01-10 14:10:22.871957 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.871968 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.871978 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.871989 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.872000 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.872010 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.872021 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:22.872032 | orchestrator |
2026-01-10 14:10:22.872042 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-10 14:10:22.872053 | orchestrator | Saturday 10 January 2026 14:10:21 +0000 (0:00:10.934) 0:03:16.172 ******
2026-01-10 14:10:22.872093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-10 14:10:22.872111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-10 14:10:22.872126 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-10 14:10:22.872145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-10 14:10:22.872157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-10 14:10:22.872168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-10 14:10:22.872186 | orchestrator |
2026-01-10 14:10:22.872197 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-10 14:10:22.872208 | orchestrator | Saturday 10 January 2026 14:10:22 +0000 (0:00:00.387) 0:03:16.560 ******
2026-01-10 14:10:22.872219 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:10:22.872234 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:10:22.872245 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:10:22.872256 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:10:22.872267 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:10:22.872278 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:10:22.872288 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:10:22.872299 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:10:22.872309 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:10:22.872320 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:10:22.872331 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:10:22.872342 | orchestrator |
2026-01-10 14:10:22.872353 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-10 14:10:22.872363 | orchestrator | Saturday 10 January 2026 14:10:22 +0000 (0:00:00.743) 0:03:17.304 ******
2026-01-10 14:10:22.872374 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:10:22.872386 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:10:22.872397 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:10:22.872407 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:10:22.872418 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:10:22.872436 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:10:29.700945 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:10:29.701056 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:10:29.701073 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:10:29.701086 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:10:29.701097 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:10:29.701108 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:10:29.701119 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:10:29.701130 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:10:29.701142 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:10:29.701154 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:10:29.701165 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:10:29.701176 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:10:29.701187 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:10:29.701198 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:10:29.701231 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:10:29.701243 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:10:29.701254 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:10:29.701265 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:10:29.701276 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:10:29.701305 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:10:29.701316 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:10:29.701327 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:10:29.701338 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:10:29.701349 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:10:29.701360 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:10:29.701371 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:10:29.701382 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:10:29.701392 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:10:29.701403 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:10:29.701414 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:10:29.701425 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:10:29.701436 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:10:29.701447 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:10:29.701460 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:10:29.701477 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:10:29.701489 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:10:29.701501 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:10:29.701513 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:10:29.701525 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:10:29.701564 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:10:29.701576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:10:29.701588 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:10:29.701601 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:10:29.701638 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:10:29.701657 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:10:29.701679 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:10:29.701695 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:10:29.701716 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:10:29.701729 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:10:29.701742 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:10:29.701754 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:10:29.701767 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:10:29.701780 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:10:29.701792 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:10:29.701803 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:10:29.701813 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:10:29.701824 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:10:29.701835 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:10:29.701846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:10:29.701857 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:10:29.701867 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:10:29.701878 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:10:29.701889 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:10:29.701900 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:10:29.701911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:10:29.701922 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:10:29.701933 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:10:29.701943 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:10:29.701954 | orchestrator |
2026-01-10 14:10:29.701967 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-10 14:10:29.701978 | orchestrator | Saturday 10 January 2026 14:10:28 +0000 (0:00:05.785) 0:03:23.090 ******
2026-01-10 14:10:29.701989 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:10:29.702000 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:10:29.702010 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:10:29.702085 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:10:29.702097 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:10:29.702108 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:10:29.702118 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:10:29.702129 | orchestrator |
2026-01-10 14:10:29.702140 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-10 14:10:29.702151 | orchestrator | Saturday 10 January 2026 14:10:29 +0000 (0:00:00.639) 0:03:23.729 ******
2026-01-10 14:10:29.702168 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:29.702179 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:10:29.702194 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:29.702205 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:29.702216 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:10:29.702227 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:10:29.702238 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:29.702248 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:10:29.702259 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:29.702270 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:29.702289 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:43.303656 | orchestrator |
2026-01-10 14:10:43.303747 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-10 14:10:43.303758 | orchestrator | Saturday 10 January 2026 14:10:29 +0000 (0:00:00.501) 0:03:24.230 ******
2026-01-10 14:10:43.303766 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:43.303778 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:43.303793 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:10:43.303803 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:10:43.303813 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:43.303824 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:10:43.303835 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:43.303842 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:10:43.303848 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:43.303854 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:43.303860 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:10:43.303866 | orchestrator |
2026-01-10 14:10:43.303872 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-10 14:10:43.303878 | orchestrator | Saturday 10 January 2026 14:10:30 +0000 (0:00:00.606) 0:03:24.837 ******
2026-01-10 14:10:43.303884 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:10:43.303890 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:10:43.303896 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:10:43.303902 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:10:43.303908 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:10:43.303914 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:10:43.303921 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:10:43.303930 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:10:43.303940 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:10:43.303950 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:10:43.303960 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:10:43.303970 | orchestrator |
2026-01-10 14:10:43.303979 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-10 14:10:43.304014 | orchestrator | Saturday 10 January 2026 14:10:30 +0000 (0:00:00.335) 0:03:25.433 ******
2026-01-10 14:10:43.304025 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:10:43.304035 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:10:43.304045 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:10:43.304054 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:10:43.304064 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:10:43.304074 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:10:43.304085 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:10:43.304095 | orchestrator |
2026-01-10 14:10:43.304105 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-10 14:10:43.304115 | orchestrator | Saturday 10 January 2026 14:10:31 +0000 (0:00:00.335) 0:03:25.768 ******
2026-01-10 14:10:43.304124 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:43.304136 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:43.304146 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:43.304156 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:43.304164 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:43.304170 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:43.304176 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:43.304182 | orchestrator |
2026-01-10 14:10:43.304188 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-10 14:10:43.304193 | orchestrator | Saturday 10 January 2026 14:10:37 +0000 (0:00:05.831) 0:03:31.600 ******
2026-01-10 14:10:43.304199 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-10 14:10:43.304228 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-10 14:10:43.304235 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:10:43.304242 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:10:43.304248 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-10 14:10:43.304255 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-10 14:10:43.304262 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:10:43.304268 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-10 14:10:43.304274 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:10:43.304280 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-10 14:10:43.304286 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:10:43.304292 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:10:43.304298 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-10 14:10:43.304303 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:10:43.304309 | orchestrator |
2026-01-10 14:10:43.304315 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-10 14:10:43.304321 | orchestrator | Saturday 10 January 2026 14:10:37 +0000 (0:00:00.334) 0:03:31.934 ******
2026-01-10 14:10:43.304327 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-10 14:10:43.304333 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-10 14:10:43.304339 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-10 14:10:43.304358 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-10 14:10:43.304368 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-10 14:10:43.304385 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-10 14:10:43.304394 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-10 14:10:43.304404 | orchestrator |
2026-01-10 14:10:43.304414 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-10 14:10:43.304423 | orchestrator | Saturday 10 January 2026 14:10:38 +0000 (0:00:01.081) 0:03:33.016 ******
2026-01-10 14:10:43.304436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:10:43.304448 | orchestrator |
2026-01-10 14:10:43.304458 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-10 14:10:43.304468 | orchestrator | Saturday 10 January 2026 14:10:38 +0000 (0:00:00.514) 0:03:33.531 ******
2026-01-10 14:10:43.304486 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:43.304496 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:43.304506 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:43.304516 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:43.304526 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:43.304552 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:43.304558 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:43.304564 | orchestrator |
2026-01-10 14:10:43.304570 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-10 14:10:43.304575 | orchestrator | Saturday 10 January 2026 14:10:40 +0000 (0:00:01.396) 0:03:34.927
****** 2026-01-10 14:10:43.304581 | orchestrator | ok: [testbed-manager] 2026-01-10 14:10:43.304587 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:10:43.304597 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:10:43.304605 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:10:43.304615 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:10:43.304624 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:10:43.304634 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:10:43.304643 | orchestrator | 2026-01-10 14:10:43.304653 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-10 14:10:43.304662 | orchestrator | Saturday 10 January 2026 14:10:41 +0000 (0:00:00.687) 0:03:35.615 ****** 2026-01-10 14:10:43.304672 | orchestrator | changed: [testbed-manager] 2026-01-10 14:10:43.304683 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:10:43.304692 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:10:43.304702 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:10:43.304711 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:10:43.304720 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:10:43.304730 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:10:43.304739 | orchestrator | 2026-01-10 14:10:43.304749 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-10 14:10:43.304759 | orchestrator | Saturday 10 January 2026 14:10:41 +0000 (0:00:00.623) 0:03:36.239 ****** 2026-01-10 14:10:43.304767 | orchestrator | ok: [testbed-manager] 2026-01-10 14:10:43.304773 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:10:43.304779 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:10:43.304784 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:10:43.304790 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:10:43.304796 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:10:43.304801 | orchestrator | ok: [testbed-node-2] 2026-01-10 
14:10:43.304808 | orchestrator |
2026-01-10 14:10:43.304818 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-10 14:10:43.304827 | orchestrator | Saturday 10 January 2026 14:10:42 +0000 (0:00:00.585) 0:03:36.824 ******
2026-01-10 14:10:43.304842 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052848.187678, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:43.304856 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052870.265453, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:43.304867 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052859.5229306, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:43.304902 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052866.3534234, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.278873 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052858.7124283, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.279019 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052860.979805, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.279044 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052874.0860415, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.279061 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.279085 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.279152 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.279201 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.279247 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.279266 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.279282 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:10:48.279298 | orchestrator |
2026-01-10 14:10:48.279317 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-10 14:10:48.279332 | orchestrator | Saturday 10 January 2026 14:10:43 +0000 (0:00:01.005) 0:03:37.829 ******
2026-01-10 14:10:48.279348 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:48.279364 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:48.279380 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:48.279394 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:48.279410 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:48.279425 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:48.279462 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:48.279477 | orchestrator |
2026-01-10 14:10:48.279492 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-10 14:10:48.279508 | orchestrator | Saturday 10 January 2026 14:10:44 +0000 (0:00:01.137) 0:03:38.966 ******
2026-01-10 14:10:48.279523 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:48.279602 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:48.279619 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:48.279635 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:48.279663 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:48.279679 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:48.279694 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:48.279709 | orchestrator |
2026-01-10 14:10:48.279725 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-10 14:10:48.279741 | orchestrator | Saturday 10 January 2026 14:10:45 +0000 (0:00:01.189) 0:03:40.156 ******
2026-01-10 14:10:48.279757 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:48.279771 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:48.279787 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:48.279802 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:48.279816 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:48.279831 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:48.279853 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:48.279868 | orchestrator |
2026-01-10 14:10:48.279883 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-10 14:10:48.279899 | orchestrator | Saturday 10 January 2026 14:10:46 +0000 (0:00:01.185) 0:03:41.341 ******
2026-01-10 14:10:48.279913 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:10:48.279928 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:10:48.279938 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:10:48.279946 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:10:48.279955 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:10:48.279964 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:10:48.279972 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:10:48.279980 | orchestrator |
2026-01-10 14:10:48.279989 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-10 14:10:48.279998 | orchestrator | Saturday 10 January 2026 14:10:47 +0000 (0:00:00.295) 0:03:41.637 ******
2026-01-10 14:10:48.280006 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:48.280015 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:48.280024 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:48.280032 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:48.280041 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:48.280049 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:48.280057 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:48.280066 | orchestrator |
2026-01-10 14:10:48.280075 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-10 14:10:48.280083 | orchestrator | Saturday 10 January 2026 14:10:47 +0000 (0:00:00.781) 0:03:42.419 ******
2026-01-10 14:10:48.280094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:10:48.280105 | orchestrator |
2026-01-10 14:10:48.280114 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-10 14:10:48.280131 | orchestrator | Saturday 10 January 2026 14:10:48 +0000 (0:00:00.391) 0:03:42.810 ******
2026-01-10 14:12:07.959337 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:07.959433 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:12:07.959442 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:12:07.959449 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:12:07.959455 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:12:07.959462 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:12:07.959468 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:12:07.959475 | orchestrator |
2026-01-10 14:12:07.959483 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-10 14:12:07.959489 | orchestrator | Saturday 10 January 2026 14:10:56 +0000 (0:00:08.704) 0:03:51.515 ******
2026-01-10 14:12:07.959493 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:07.959497 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:07.959511 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:07.959515 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:07.959519 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:07.959553 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:07.959558 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:07.959561 | orchestrator |
2026-01-10 14:12:07.959566 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-10 14:12:07.959576 | orchestrator | Saturday 10 January 2026 14:10:58 +0000 (0:00:01.324) 0:03:52.839 ******
2026-01-10 14:12:07.959580 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:07.959584 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:07.959588 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:07.959591 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:07.959595 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:07.959599 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:07.959603 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:07.959607 | orchestrator |
2026-01-10 14:12:07.959611 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-10 14:12:07.959614 | orchestrator | Saturday 10 January 2026 14:10:59 +0000 (0:00:01.132) 0:03:53.971 ******
2026-01-10 14:12:07.959618 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:07.959622 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:07.959626 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:07.959629 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:07.959633 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:07.959637 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:07.959641 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:07.959644 | orchestrator |
2026-01-10 14:12:07.959649 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-10 14:12:07.959654 | orchestrator | Saturday 10 January 2026 14:10:59 +0000 (0:00:00.325) 0:03:54.297 ******
2026-01-10 14:12:07.959658 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:07.959662 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:07.959666 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:07.959669 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:07.959673 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:07.959677 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:07.959681 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:07.959684 | orchestrator |
2026-01-10 14:12:07.959688 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-10 14:12:07.959692 | orchestrator | Saturday 10 January 2026 14:11:00 +0000 (0:00:00.322) 0:03:54.620 ******
2026-01-10 14:12:07.959696 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:07.959700 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:07.959703 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:07.959707 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:07.959711 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:07.959714 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:07.959718 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:07.959722 | orchestrator |
2026-01-10 14:12:07.959726 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-10 14:12:07.959729 | orchestrator | Saturday 10 January 2026 14:11:00 +0000 (0:00:00.296) 0:03:54.917 ******
2026-01-10 14:12:07.959733 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:07.959737 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:07.959741 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:07.959744 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:07.959748 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:07.959752 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:07.959756 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:07.959760 | orchestrator |
2026-01-10 14:12:07.959773 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-10 14:12:07.959777 | orchestrator | Saturday 10 January 2026 14:11:05 +0000 (0:00:05.539) 0:04:00.457 ******
2026-01-10 14:12:07.959782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:12:07.959791 | orchestrator |
2026-01-10 14:12:07.959795 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-10 14:12:07.959799 | orchestrator | Saturday 10 January 2026 14:11:06 +0000 (0:00:00.402) 0:04:00.859 ******
2026-01-10 14:12:07.959803 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-10 14:12:07.959807 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-10 14:12:07.959811 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-10 14:12:07.959815 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-10 14:12:07.959819 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:12:07.959822 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-10 14:12:07.959826 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-10 14:12:07.959830 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:12:07.959834 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-01-10 14:12:07.959837 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-10 14:12:07.959841 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:12:07.959848 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-10 14:12:07.959854 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-10 14:12:07.959859 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:12:07.959865 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-10 14:12:07.959871 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-10 14:12:07.959890 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:12:07.959896 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:12:07.959902 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-10 14:12:07.959908 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-10 14:12:07.959915 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:12:07.959921 | orchestrator |
2026-01-10 14:12:07.959927 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-10 14:12:07.959934 | orchestrator | Saturday 10 January 2026 14:11:06 +0000 (0:00:00.352) 0:04:01.211 ******
2026-01-10 14:12:07.959942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:12:07.959949 | orchestrator |
2026-01-10 14:12:07.959955 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-10 14:12:07.959960 | orchestrator | Saturday 10 January 2026 14:11:07 +0000 (0:00:00.325) 0:04:01.642 ******
2026-01-10 14:12:07.959966 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-10 14:12:07.959972 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-10 14:12:07.959978 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:12:07.959985 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:12:07.959992 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-10 14:12:07.959998 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-10 14:12:07.960003 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:12:07.960009 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:12:07.960014 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-10 14:12:07.960021 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-10 14:12:07.960030 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:12:07.960038 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:12:07.960044 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-10 14:12:07.960049 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:12:07.960055 | orchestrator |
2026-01-10 14:12:07.960062 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-10 14:12:07.960074 | orchestrator | Saturday 10 January 2026 14:11:07 +0000 (0:00:00.325) 0:04:01.967 ******
2026-01-10 14:12:07.960081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:12:07.960089 | orchestrator |
2026-01-10 14:12:07.960097 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-10 14:12:07.960104 | orchestrator | Saturday 10 January 2026 14:11:07 +0000 (0:00:00.442) 0:04:02.409 ******
2026-01-10 14:12:07.960110 | orchestrator | changed: [testbed-node-2]
2026-01-10
14:12:07.960116 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:12:07.960121 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:12:07.960128 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:12:07.960134 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:12:07.960141 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:12:07.960148 | orchestrator | changed: [testbed-manager]
2026-01-10 14:12:07.960154 | orchestrator |
2026-01-10 14:12:07.960161 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-10 14:12:07.960168 | orchestrator | Saturday 10 January 2026 14:11:41 +0000 (0:00:33.435) 0:04:35.844 ******
2026-01-10 14:12:07.960174 | orchestrator | changed: [testbed-manager]
2026-01-10 14:12:07.960181 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:12:07.960189 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:12:07.960194 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:12:07.960198 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:12:07.960202 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:12:07.960206 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:12:07.960210 | orchestrator |
2026-01-10 14:12:07.960214 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-10 14:12:07.960218 | orchestrator | Saturday 10 January 2026 14:11:50 +0000 (0:00:08.852) 0:04:44.697 ******
2026-01-10 14:12:07.960221 | orchestrator | changed: [testbed-manager]
2026-01-10 14:12:07.960225 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:12:07.960229 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:12:07.960232 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:12:07.960238 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:12:07.960244 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:12:07.960250 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:12:07.960256 | orchestrator |
2026-01-10 14:12:07.960263 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-10 14:12:07.960268 | orchestrator | Saturday 10 January 2026 14:11:59 +0000 (0:00:09.212) 0:04:53.909 ******
2026-01-10 14:12:07.960274 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:07.960280 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:07.960287 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:07.960293 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:07.960299 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:07.960306 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:07.960312 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:07.960318 | orchestrator |
2026-01-10 14:12:07.960325 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-10 14:12:07.960331 | orchestrator | Saturday 10 January 2026 14:12:01 +0000 (0:00:02.086) 0:04:55.996 ******
2026-01-10 14:12:07.960337 | orchestrator | changed: [testbed-manager]
2026-01-10 14:12:07.960343 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:12:07.960349 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:12:07.960356 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:12:07.960362 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:12:07.960368 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:12:07.960374 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:12:07.960380 | orchestrator |
2026-01-10 14:12:07.960393 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-10 14:12:19.614132 | orchestrator | Saturday 10 January 2026 14:12:07 +0000 (0:00:06.491) 0:05:02.487 ******
2026-01-10 14:12:19.614330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:12:19.614362 | orchestrator |
2026-01-10 14:12:19.614385 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-10 14:12:19.614405 | orchestrator | Saturday 10 January 2026 14:12:08 +0000 (0:00:00.433) 0:05:02.921 ******
2026-01-10 14:12:19.614425 | orchestrator | changed: [testbed-manager]
2026-01-10 14:12:19.614445 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:12:19.614464 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:12:19.614485 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:12:19.614505 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:12:19.614548 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:12:19.614560 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:12:19.614571 | orchestrator |
2026-01-10 14:12:19.614584 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-10 14:12:19.614597 | orchestrator | Saturday 10 January 2026 14:12:09 +0000 (0:00:00.739) 0:05:03.661 ******
2026-01-10 14:12:19.614610 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:19.614624 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:19.614636 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:19.614648 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:19.614661 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:19.614673 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:19.614708 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:19.614721 | orchestrator |
2026-01-10 14:12:19.614733 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-10 14:12:19.614746 | orchestrator | Saturday 10 January 2026 14:12:10 +0000 (0:00:01.834) 0:05:05.496 ******
2026-01-10 14:12:19.614759 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:12:19.614771 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:12:19.614784 | orchestrator | changed: [testbed-manager]
2026-01-10 14:12:19.614796 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:12:19.614808 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:12:19.614820 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:12:19.614832 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:12:19.614844 | orchestrator |
2026-01-10 14:12:19.614863 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-10 14:12:19.614882 | orchestrator | Saturday 10 January 2026 14:12:11 +0000 (0:00:00.823) 0:05:06.319 ******
2026-01-10 14:12:19.614901 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:12:19.614919 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:12:19.614939 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:12:19.614957 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:12:19.614976 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:12:19.614996 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:12:19.615031 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:12:19.615050 | orchestrator |
2026-01-10 14:12:19.615061 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-10 14:12:19.615072 | orchestrator | Saturday 10 January 2026 14:12:12 +0000 (0:00:00.291) 0:05:06.610 ******
2026-01-10 14:12:19.615083 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:12:19.615094 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:12:19.615104 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:12:19.615115 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:12:19.615125 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:12:19.615136 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:12:19.615147 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:12:19.615157 | orchestrator |
2026-01-10 14:12:19.615168 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-10 14:12:19.615179 | orchestrator | Saturday 10 January 2026 14:12:12 +0000 (0:00:00.425) 0:05:07.036 ******
2026-01-10 14:12:19.615202 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:19.615220 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:19.615231 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:19.615242 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:19.615253 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:19.615264 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:19.615275 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:19.615285 | orchestrator |
2026-01-10 14:12:19.615296 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-10 14:12:19.615307 | orchestrator | Saturday 10 January 2026 14:12:12 +0000 (0:00:00.301) 0:05:07.337 ******
2026-01-10 14:12:19.615318 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:12:19.615328 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:12:19.615339 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:12:19.615350 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:12:19.615360 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:12:19.615371 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:12:19.615381 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:12:19.615392 | orchestrator |
2026-01-10 14:12:19.615403 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-10 14:12:19.615415 | orchestrator | Saturday 10 January 2026 14:12:13 +0000 (0:00:00.320) 0:05:07.658 ******
2026-01-10 14:12:19.615425 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:19.615436 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:19.615447 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:19.615457 | orchestrator | ok:
[testbed-node-5] 2026-01-10 14:12:19.615468 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:12:19.615478 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:12:19.615489 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:12:19.615500 | orchestrator | 2026-01-10 14:12:19.615510 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-01-10 14:12:19.615541 | orchestrator | Saturday 10 January 2026 14:12:13 +0000 (0:00:00.309) 0:05:07.968 ****** 2026-01-10 14:12:19.615552 | orchestrator | ok: [testbed-manager] =>  2026-01-10 14:12:19.615563 | orchestrator |  docker_version: 5:27.5.1 2026-01-10 14:12:19.615574 | orchestrator | ok: [testbed-node-3] =>  2026-01-10 14:12:19.615584 | orchestrator |  docker_version: 5:27.5.1 2026-01-10 14:12:19.615595 | orchestrator | ok: [testbed-node-4] =>  2026-01-10 14:12:19.615606 | orchestrator |  docker_version: 5:27.5.1 2026-01-10 14:12:19.615624 | orchestrator | ok: [testbed-node-5] =>  2026-01-10 14:12:19.615643 | orchestrator |  docker_version: 5:27.5.1 2026-01-10 14:12:19.615698 | orchestrator | ok: [testbed-node-0] =>  2026-01-10 14:12:19.615724 | orchestrator |  docker_version: 5:27.5.1 2026-01-10 14:12:19.615742 | orchestrator | ok: [testbed-node-1] =>  2026-01-10 14:12:19.615761 | orchestrator |  docker_version: 5:27.5.1 2026-01-10 14:12:19.615778 | orchestrator | ok: [testbed-node-2] =>  2026-01-10 14:12:19.615795 | orchestrator |  docker_version: 5:27.5.1 2026-01-10 14:12:19.615815 | orchestrator | 2026-01-10 14:12:19.615834 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-01-10 14:12:19.615853 | orchestrator | Saturday 10 January 2026 14:12:13 +0000 (0:00:00.301) 0:05:08.270 ****** 2026-01-10 14:12:19.615872 | orchestrator | ok: [testbed-manager] =>  2026-01-10 14:12:19.615891 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-10 14:12:19.615911 | orchestrator | ok: [testbed-node-3] =>  2026-01-10 14:12:19.615930 | 
orchestrator |  docker_cli_version: 5:27.5.1 2026-01-10 14:12:19.615948 | orchestrator | ok: [testbed-node-4] =>  2026-01-10 14:12:19.615967 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-10 14:12:19.615987 | orchestrator | ok: [testbed-node-5] =>  2026-01-10 14:12:19.616006 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-10 14:12:19.616025 | orchestrator | ok: [testbed-node-0] =>  2026-01-10 14:12:19.616042 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-10 14:12:19.616060 | orchestrator | ok: [testbed-node-1] =>  2026-01-10 14:12:19.616079 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-10 14:12:19.616114 | orchestrator | ok: [testbed-node-2] =>  2026-01-10 14:12:19.616134 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-10 14:12:19.616151 | orchestrator | 2026-01-10 14:12:19.616169 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-01-10 14:12:19.616181 | orchestrator | Saturday 10 January 2026 14:12:14 +0000 (0:00:00.308) 0:05:08.578 ****** 2026-01-10 14:12:19.616191 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:12:19.616202 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:12:19.616213 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:12:19.616223 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:12:19.616233 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:12:19.616244 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:12:19.616255 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:12:19.616265 | orchestrator | 2026-01-10 14:12:19.616276 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-01-10 14:12:19.616287 | orchestrator | Saturday 10 January 2026 14:12:14 +0000 (0:00:00.273) 0:05:08.852 ****** 2026-01-10 14:12:19.616298 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:12:19.616308 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:12:19.616319 
| orchestrator | skipping: [testbed-node-4] 2026-01-10 14:12:19.616329 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:12:19.616340 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:12:19.616351 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:12:19.616361 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:12:19.616372 | orchestrator | 2026-01-10 14:12:19.616383 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-01-10 14:12:19.616394 | orchestrator | Saturday 10 January 2026 14:12:14 +0000 (0:00:00.282) 0:05:09.135 ****** 2026-01-10 14:12:19.616408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:12:19.616421 | orchestrator | 2026-01-10 14:12:19.616432 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-01-10 14:12:19.616443 | orchestrator | Saturday 10 January 2026 14:12:15 +0000 (0:00:00.423) 0:05:09.558 ****** 2026-01-10 14:12:19.616454 | orchestrator | ok: [testbed-manager] 2026-01-10 14:12:19.616465 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:12:19.616476 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:12:19.616486 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:12:19.616497 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:12:19.616507 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:12:19.616686 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:12:19.616823 | orchestrator | 2026-01-10 14:12:19.616866 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-01-10 14:12:19.616877 | orchestrator | Saturday 10 January 2026 14:12:16 +0000 (0:00:01.017) 0:05:10.576 ****** 2026-01-10 14:12:19.616883 | orchestrator | ok: [testbed-manager] 
2026-01-10 14:12:19.616889 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:12:19.616895 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:12:19.616900 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:12:19.616906 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:12:19.616912 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:12:19.616918 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:12:19.616923 | orchestrator | 2026-01-10 14:12:19.616929 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-01-10 14:12:19.616936 | orchestrator | Saturday 10 January 2026 14:12:19 +0000 (0:00:03.155) 0:05:13.731 ****** 2026-01-10 14:12:19.616942 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-01-10 14:12:19.616948 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-01-10 14:12:19.616955 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-01-10 14:12:19.616961 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-01-10 14:12:19.616985 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-01-10 14:12:19.616990 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-01-10 14:12:19.616996 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:12:19.617002 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-01-10 14:12:19.617007 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-01-10 14:12:19.617012 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-01-10 14:12:19.617018 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:12:19.617023 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-01-10 14:12:19.617029 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-01-10 14:12:19.617034 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-01-10 14:12:19.617040 | 
orchestrator | skipping: [testbed-node-4] 2026-01-10 14:12:19.617045 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-01-10 14:12:19.617071 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-01-10 14:13:22.883554 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-01-10 14:13:22.883681 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:13:22.883699 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-01-10 14:13:22.883711 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-01-10 14:13:22.883723 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-01-10 14:13:22.883734 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:13:22.883745 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:13:22.883755 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-01-10 14:13:22.883766 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-01-10 14:13:22.883777 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-01-10 14:13:22.883788 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:13:22.883799 | orchestrator | 2026-01-10 14:13:22.883812 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-01-10 14:13:22.883825 | orchestrator | Saturday 10 January 2026 14:12:19 +0000 (0:00:00.625) 0:05:14.357 ****** 2026-01-10 14:13:22.883835 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:22.883846 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.883857 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.883867 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:22.883878 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.883888 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.883899 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:22.883909 | orchestrator | 2026-01-10 
14:13:22.883920 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-01-10 14:13:22.883931 | orchestrator | Saturday 10 January 2026 14:12:27 +0000 (0:00:07.796) 0:05:22.154 ****** 2026-01-10 14:13:22.883941 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:22.883952 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.883962 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.883973 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:22.883983 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:22.883994 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.884004 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.884015 | orchestrator | 2026-01-10 14:13:22.884025 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-01-10 14:13:22.884036 | orchestrator | Saturday 10 January 2026 14:12:28 +0000 (0:00:01.041) 0:05:23.196 ****** 2026-01-10 14:13:22.884048 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:22.884060 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.884072 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.884084 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.884098 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:22.884116 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:22.884135 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.884186 | orchestrator | 2026-01-10 14:13:22.884200 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-01-10 14:13:22.884212 | orchestrator | Saturday 10 January 2026 14:12:36 +0000 (0:00:08.221) 0:05:31.418 ****** 2026-01-10 14:13:22.884224 | orchestrator | changed: [testbed-manager] 2026-01-10 14:13:22.884236 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.884248 | orchestrator | changed: [testbed-node-4] 2026-01-10 
14:13:22.884260 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.884273 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.884285 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:22.884298 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.884310 | orchestrator | 2026-01-10 14:13:22.884322 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-01-10 14:13:22.884335 | orchestrator | Saturday 10 January 2026 14:12:40 +0000 (0:00:03.160) 0:05:34.578 ****** 2026-01-10 14:13:22.884347 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:22.884359 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.884371 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:22.884383 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.884395 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.884430 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:22.884450 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.884466 | orchestrator | 2026-01-10 14:13:22.884511 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-01-10 14:13:22.884531 | orchestrator | Saturday 10 January 2026 14:12:41 +0000 (0:00:01.310) 0:05:35.889 ****** 2026-01-10 14:13:22.884550 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:22.884569 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.884587 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:22.884604 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.884615 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:22.884626 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.884636 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.884647 | orchestrator | 2026-01-10 14:13:22.884657 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-01-10 
14:13:22.884668 | orchestrator | Saturday 10 January 2026 14:12:42 +0000 (0:00:01.556) 0:05:37.445 ****** 2026-01-10 14:13:22.884679 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:13:22.884689 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:13:22.884700 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:13:22.884710 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:13:22.884721 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:13:22.884731 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:13:22.884742 | orchestrator | changed: [testbed-manager] 2026-01-10 14:13:22.884753 | orchestrator | 2026-01-10 14:13:22.884764 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-01-10 14:13:22.884775 | orchestrator | Saturday 10 January 2026 14:12:43 +0000 (0:00:00.596) 0:05:38.042 ****** 2026-01-10 14:13:22.884785 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:22.884796 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.884806 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.884816 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.884827 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:22.884837 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.884848 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:22.884858 | orchestrator | 2026-01-10 14:13:22.884869 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-01-10 14:13:22.884898 | orchestrator | Saturday 10 January 2026 14:12:53 +0000 (0:00:10.324) 0:05:48.366 ****** 2026-01-10 14:13:22.884910 | orchestrator | changed: [testbed-manager] 2026-01-10 14:13:22.884920 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.884931 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:22.884942 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.884963 | orchestrator | changed: 
[testbed-node-0] 2026-01-10 14:13:22.884974 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.884985 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.884995 | orchestrator | 2026-01-10 14:13:22.885006 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-01-10 14:13:22.885017 | orchestrator | Saturday 10 January 2026 14:12:54 +0000 (0:00:00.946) 0:05:49.313 ****** 2026-01-10 14:13:22.885027 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:22.885038 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.885048 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.885059 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:22.885069 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.885080 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.885090 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:22.885101 | orchestrator | 2026-01-10 14:13:22.885111 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-01-10 14:13:22.885122 | orchestrator | Saturday 10 January 2026 14:13:04 +0000 (0:00:09.574) 0:05:58.888 ****** 2026-01-10 14:13:22.885133 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:22.885143 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.885159 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.885184 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.885207 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:22.885224 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:22.885244 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.885262 | orchestrator | 2026-01-10 14:13:22.885280 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-01-10 14:13:22.885298 | orchestrator | Saturday 10 January 2026 14:13:16 +0000 (0:00:11.732) 0:06:10.620 ****** 2026-01-10 
14:13:22.885309 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-01-10 14:13:22.885319 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-01-10 14:13:22.885330 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-01-10 14:13:22.885340 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-01-10 14:13:22.885351 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-01-10 14:13:22.885362 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-01-10 14:13:22.885372 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-01-10 14:13:22.885383 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-01-10 14:13:22.885394 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-01-10 14:13:22.885404 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-01-10 14:13:22.885415 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-01-10 14:13:22.885426 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-01-10 14:13:22.885436 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-01-10 14:13:22.885447 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-01-10 14:13:22.885457 | orchestrator | 2026-01-10 14:13:22.885468 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-01-10 14:13:22.885535 | orchestrator | Saturday 10 January 2026 14:13:17 +0000 (0:00:01.361) 0:06:11.982 ****** 2026-01-10 14:13:22.885549 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:13:22.885560 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:13:22.885570 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:13:22.885581 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:13:22.885591 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:13:22.885602 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:13:22.885612 | orchestrator 
| skipping: [testbed-node-2] 2026-01-10 14:13:22.885623 | orchestrator | 2026-01-10 14:13:22.885633 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-01-10 14:13:22.885652 | orchestrator | Saturday 10 January 2026 14:13:17 +0000 (0:00:00.528) 0:06:12.510 ****** 2026-01-10 14:13:22.885663 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:22.885684 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:22.885695 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:22.885705 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:22.885716 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:22.885727 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:22.885737 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:22.885748 | orchestrator | 2026-01-10 14:13:22.885759 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-01-10 14:13:22.885771 | orchestrator | Saturday 10 January 2026 14:13:21 +0000 (0:00:03.903) 0:06:16.414 ****** 2026-01-10 14:13:22.885781 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:13:22.885792 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:13:22.885802 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:13:22.885813 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:13:22.885824 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:13:22.885834 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:13:22.885845 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:13:22.885855 | orchestrator | 2026-01-10 14:13:22.885867 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-01-10 14:13:22.885879 | orchestrator | Saturday 10 January 2026 14:13:22 +0000 (0:00:00.501) 0:06:16.916 ****** 2026-01-10 14:13:22.885890 | orchestrator | skipping: [testbed-manager] => 
(item=python3-docker)  2026-01-10 14:13:22.885901 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-01-10 14:13:22.885919 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:13:22.885947 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-01-10 14:13:22.885968 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-01-10 14:13:22.885985 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:13:22.886003 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-01-10 14:13:22.886101 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-01-10 14:13:22.886124 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:13:22.886157 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-01-10 14:13:42.093827 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-01-10 14:13:42.093956 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:13:42.093971 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-01-10 14:13:42.093981 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-01-10 14:13:42.093989 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:13:42.093998 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-01-10 14:13:42.094006 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-01-10 14:13:42.094061 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:13:42.094073 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-01-10 14:13:42.094081 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-01-10 14:13:42.094090 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:13:42.094099 | orchestrator | 2026-01-10 14:13:42.094109 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-01-10 14:13:42.094119 | 
orchestrator | Saturday 10 January 2026 14:13:23 +0000 (0:00:00.739) 0:06:17.655 ****** 2026-01-10 14:13:42.094127 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:13:42.094136 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:13:42.094144 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:13:42.094152 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:13:42.094160 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:13:42.094172 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:13:42.094223 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:13:42.094241 | orchestrator | 2026-01-10 14:13:42.094254 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-01-10 14:13:42.094269 | orchestrator | Saturday 10 January 2026 14:13:23 +0000 (0:00:00.500) 0:06:18.156 ****** 2026-01-10 14:13:42.094312 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:13:42.094326 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:13:42.094334 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:13:42.094342 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:13:42.094351 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:13:42.094360 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:13:42.094370 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:13:42.094379 | orchestrator | 2026-01-10 14:13:42.094389 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-01-10 14:13:42.094398 | orchestrator | Saturday 10 January 2026 14:13:24 +0000 (0:00:00.473) 0:06:18.629 ****** 2026-01-10 14:13:42.094408 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:13:42.094417 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:13:42.094431 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:13:42.094451 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:13:42.094490 | orchestrator | 
skipping: [testbed-node-0]
2026-01-10 14:13:42.094503 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:13:42.094516 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:13:42.094531 | orchestrator |
2026-01-10 14:13:42.094545 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-10 14:13:42.094559 | orchestrator | Saturday 10 January 2026 14:13:24 +0000 (0:00:00.502) 0:06:19.132 ******
2026-01-10 14:13:42.094605 | orchestrator | ok: [testbed-manager]
2026-01-10 14:13:42.094614 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:13:42.094624 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:13:42.094633 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:13:42.094642 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:13:42.094651 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:13:42.094660 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:13:42.094669 | orchestrator |
2026-01-10 14:13:42.094678 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-10 14:13:42.094688 | orchestrator | Saturday 10 January 2026 14:13:26 +0000 (0:00:01.918) 0:06:21.051 ******
2026-01-10 14:13:42.094699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:13:42.094710 | orchestrator |
2026-01-10 14:13:42.094720 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-10 14:13:42.094728 | orchestrator | Saturday 10 January 2026 14:13:27 +0000 (0:00:00.818) 0:06:21.869 ******
2026-01-10 14:13:42.094736 | orchestrator | ok: [testbed-manager]
2026-01-10 14:13:42.094744 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:13:42.094751 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:13:42.094759 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:13:42.094767 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:13:42.094775 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:13:42.094783 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:13:42.094790 | orchestrator |
2026-01-10 14:13:42.094798 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-10 14:13:42.094807 | orchestrator | Saturday 10 January 2026 14:13:28 +0000 (0:00:00.839) 0:06:22.709 ******
2026-01-10 14:13:42.094815 | orchestrator | ok: [testbed-manager]
2026-01-10 14:13:42.094823 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:13:42.094831 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:13:42.094839 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:13:42.094846 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:13:42.094854 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:13:42.094862 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:13:42.094870 | orchestrator |
2026-01-10 14:13:42.094878 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-10 14:13:42.094886 | orchestrator | Saturday 10 January 2026 14:13:29 +0000 (0:00:00.918) 0:06:23.627 ******
2026-01-10 14:13:42.094903 | orchestrator | ok: [testbed-manager]
2026-01-10 14:13:42.094911 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:13:42.094919 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:13:42.094927 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:13:42.094976 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:13:42.094985 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:13:42.094993 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:13:42.095001 | orchestrator |
2026-01-10 14:13:42.095009 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-01-10 14:13:42.095035 | orchestrator | Saturday 10 January 2026 14:13:30 +0000 (0:00:01.554) 0:06:25.181 ******
2026-01-10 14:13:42.095044 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:13:42.095052 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:13:42.095060 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:13:42.095068 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:13:42.095076 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:13:42.095083 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:13:42.095091 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:13:42.095099 | orchestrator |
2026-01-10 14:13:42.095107 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-01-10 14:13:42.095115 | orchestrator | Saturday 10 January 2026 14:13:32 +0000 (0:00:01.585) 0:06:26.767 ******
2026-01-10 14:13:42.095123 | orchestrator | ok: [testbed-manager]
2026-01-10 14:13:42.095131 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:13:42.095139 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:13:42.095146 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:13:42.095154 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:13:42.095163 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:13:42.095176 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:13:42.095194 | orchestrator |
2026-01-10 14:13:42.095210 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-01-10 14:13:42.095223 | orchestrator | Saturday 10 January 2026 14:13:33 +0000 (0:00:01.321) 0:06:28.088 ******
2026-01-10 14:13:42.095237 | orchestrator | changed: [testbed-manager]
2026-01-10 14:13:42.095251 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:13:42.095265 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:13:42.095278 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:13:42.095291 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:13:42.095298 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:13:42.095306 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:13:42.095314 | orchestrator |
2026-01-10 14:13:42.095322 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-01-10 14:13:42.095330 | orchestrator | Saturday 10 January 2026 14:13:34 +0000 (0:00:01.412) 0:06:29.500 ******
2026-01-10 14:13:42.095342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:13:42.095362 | orchestrator |
2026-01-10 14:13:42.095377 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-01-10 14:13:42.095390 | orchestrator | Saturday 10 January 2026 14:13:35 +0000 (0:00:00.972) 0:06:30.472 ******
2026-01-10 14:13:42.095402 | orchestrator | ok: [testbed-manager]
2026-01-10 14:13:42.095414 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:13:42.095426 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:13:42.095438 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:13:42.095451 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:13:42.095489 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:13:42.095501 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:13:42.095514 | orchestrator |
2026-01-10 14:13:42.095527 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-01-10 14:13:42.095541 | orchestrator | Saturday 10 January 2026 14:13:37 +0000 (0:00:01.374) 0:06:31.847 ******
2026-01-10 14:13:42.095554 | orchestrator | ok: [testbed-manager]
2026-01-10 14:13:42.095583 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:13:42.095597 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:13:42.095611 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:13:42.095624 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:13:42.095638 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:13:42.095651 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:13:42.095664 | orchestrator |
2026-01-10 14:13:42.095677 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-01-10 14:13:42.095691 | orchestrator | Saturday 10 January 2026 14:13:38 +0000 (0:00:01.168) 0:06:33.015 ******
2026-01-10 14:13:42.095705 | orchestrator | ok: [testbed-manager]
2026-01-10 14:13:42.095717 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:13:42.095730 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:13:42.095743 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:13:42.095764 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:13:42.095778 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:13:42.095868 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:13:42.095881 | orchestrator |
2026-01-10 14:13:42.095895 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-01-10 14:13:42.095909 | orchestrator | Saturday 10 January 2026 14:13:39 +0000 (0:00:01.150) 0:06:34.166 ******
2026-01-10 14:13:42.095923 | orchestrator | ok: [testbed-manager]
2026-01-10 14:13:42.095934 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:13:42.095942 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:13:42.095950 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:13:42.095958 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:13:42.095966 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:13:42.095974 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:13:42.095982 | orchestrator |
2026-01-10 14:13:42.095990 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-01-10 14:13:42.095998 | orchestrator | Saturday 10 January 2026 14:13:40 +0000 (0:00:01.327) 0:06:35.494 ******
2026-01-10 14:13:42.096006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:13:42.096014 | orchestrator |
2026-01-10 14:13:42.096022 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-10 14:13:42.096030 | orchestrator | Saturday 10 January 2026 14:13:41 +0000 (0:00:00.835) 0:06:36.330 ******
2026-01-10 14:13:42.096038 | orchestrator |
2026-01-10 14:13:42.096046 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-10 14:13:42.096054 | orchestrator | Saturday 10 January 2026 14:13:41 +0000 (0:00:00.038) 0:06:36.368 ******
2026-01-10 14:13:42.096062 | orchestrator |
2026-01-10 14:13:42.096070 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-10 14:13:42.096077 | orchestrator | Saturday 10 January 2026 14:13:41 +0000 (0:00:00.037) 0:06:36.406 ******
2026-01-10 14:13:42.096085 | orchestrator |
2026-01-10 14:13:42.096093 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-10 14:13:42.096112 | orchestrator | Saturday 10 January 2026 14:13:41 +0000 (0:00:00.044) 0:06:36.450 ******
2026-01-10 14:14:08.563164 | orchestrator |
2026-01-10 14:14:08.563289 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-10 14:14:08.563307 | orchestrator | Saturday 10 January 2026 14:13:41 +0000 (0:00:00.037) 0:06:36.488 ******
2026-01-10 14:14:08.563320 | orchestrator |
2026-01-10 14:14:08.563332 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-10 14:14:08.563343 | orchestrator | Saturday 10 January 2026 14:13:41 +0000 (0:00:00.038) 0:06:36.526 ******
2026-01-10 14:14:08.563354 | orchestrator |
2026-01-10 14:14:08.563366 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-10 14:14:08.563377 | orchestrator | Saturday 10 January 2026 14:13:42 +0000 (0:00:00.044) 0:06:36.571 ******
2026-01-10 14:14:08.563388 | orchestrator |
2026-01-10 14:14:08.563399 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-10 14:14:08.563473 | orchestrator | Saturday 10 January 2026 14:13:42 +0000 (0:00:00.038) 0:06:36.610 ******
2026-01-10 14:14:08.563486 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:08.563499 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:08.563510 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:08.563520 | orchestrator |
2026-01-10 14:14:08.563532 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-01-10 14:14:08.563543 | orchestrator | Saturday 10 January 2026 14:13:43 +0000 (0:00:01.224) 0:06:37.834 ******
2026-01-10 14:14:08.563554 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:08.563566 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:08.563577 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:08.563588 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:08.563599 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:08.563610 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:08.563621 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:08.563632 | orchestrator |
2026-01-10 14:14:08.563643 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-01-10 14:14:08.563654 | orchestrator | Saturday 10 January 2026 14:13:44 +0000 (0:00:01.687) 0:06:39.521 ******
2026-01-10 14:14:08.563665 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:08.563677 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:08.563687 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:08.563698 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:08.563709 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:08.563720 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:08.563731 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:08.563741 | orchestrator |
2026-01-10 14:14:08.563753 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-01-10 14:14:08.563763 | orchestrator | Saturday 10 January 2026 14:13:46 +0000 (0:00:01.192) 0:06:40.714 ******
2026-01-10 14:14:08.563774 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:08.563785 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:08.563796 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:08.563807 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:08.563818 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:08.563829 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:08.563840 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:08.563851 | orchestrator |
2026-01-10 14:14:08.563862 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-01-10 14:14:08.563873 | orchestrator | Saturday 10 January 2026 14:13:48 +0000 (0:00:02.348) 0:06:43.063 ******
2026-01-10 14:14:08.563883 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:08.563894 | orchestrator |
2026-01-10 14:14:08.563905 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-01-10 14:14:08.563916 | orchestrator | Saturday 10 January 2026 14:13:48 +0000 (0:00:00.103) 0:06:43.167 ******
2026-01-10 14:14:08.563927 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:08.563939 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:08.563957 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:08.563976 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:08.563987 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:08.563998 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:08.564009 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:08.564020 | orchestrator |
2026-01-10 14:14:08.564035 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-01-10 14:14:08.564052 | orchestrator | Saturday 10 January 2026 14:13:49 +0000 (0:00:01.020) 0:06:44.187 ******
2026-01-10 14:14:08.564069 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:08.564085 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:08.564096 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:08.564107 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:08.564121 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:08.564151 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:08.564165 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:08.564177 | orchestrator |
2026-01-10 14:14:08.564188 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-01-10 14:14:08.564199 | orchestrator | Saturday 10 January 2026 14:13:50 +0000 (0:00:00.538) 0:06:44.725 ******
2026-01-10 14:14:08.564211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:14:08.564224 | orchestrator |
2026-01-10 14:14:08.564236 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-01-10 14:14:08.564247 | orchestrator | Saturday 10 January 2026 14:13:51 +0000 (0:00:01.092) 0:06:45.818 ******
2026-01-10 14:14:08.564258 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:08.564269 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:08.564280 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:08.564298 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:08.564311 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:08.564322 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:08.564333 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:08.564344 | orchestrator |
2026-01-10 14:14:08.564355 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-01-10 14:14:08.564366 | orchestrator | Saturday 10 January 2026 14:13:52 +0000 (0:00:00.862) 0:06:46.680 ******
2026-01-10 14:14:08.564377 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-01-10 14:14:08.564407 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-01-10 14:14:08.564419 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-01-10 14:14:08.564473 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-01-10 14:14:08.564485 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-01-10 14:14:08.564496 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-01-10 14:14:08.564507 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-01-10 14:14:08.564518 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-01-10 14:14:08.564529 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-01-10 14:14:08.564540 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-01-10 14:14:08.564551 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-01-10 14:14:08.564562 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-01-10 14:14:08.564573 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-01-10 14:14:08.564584 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-01-10 14:14:08.564596 | orchestrator |
2026-01-10 14:14:08.564607 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-01-10 14:14:08.564618 | orchestrator | Saturday 10 January 2026 14:13:54 +0000 (0:00:02.592) 0:06:49.273 ******
2026-01-10 14:14:08.564629 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:08.564640 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:08.564651 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:08.564663 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:08.564673 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:08.564685 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:08.564696 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:08.564707 | orchestrator |
2026-01-10 14:14:08.564718 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-01-10 14:14:08.564729 | orchestrator | Saturday 10 January 2026 14:13:55 +0000 (0:00:00.654) 0:06:49.928 ******
2026-01-10 14:14:08.564743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:14:08.564763 | orchestrator |
2026-01-10 14:14:08.564775 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-01-10 14:14:08.564786 | orchestrator | Saturday 10 January 2026 14:13:56 +0000 (0:00:00.859) 0:06:50.787 ******
2026-01-10 14:14:08.564797 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:08.564808 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:08.564826 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:08.564839 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:08.564850 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:08.564861 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:08.564871 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:08.564882 | orchestrator |
2026-01-10 14:14:08.564894 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-01-10 14:14:08.564905 | orchestrator | Saturday 10 January 2026 14:13:57 +0000 (0:00:00.905) 0:06:51.692 ******
2026-01-10 14:14:08.564916 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:08.564927 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:08.564938 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:08.564949 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:08.564959 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:08.564970 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:08.564981 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:08.564992 | orchestrator |
2026-01-10 14:14:08.565003 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-01-10 14:14:08.565029 | orchestrator | Saturday 10 January 2026 14:13:58 +0000 (0:00:01.030) 0:06:52.723 ******
2026-01-10 14:14:08.565040 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:08.565051 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:08.565062 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:08.565073 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:08.565084 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:08.565095 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:08.565121 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:08.565133 | orchestrator |
2026-01-10 14:14:08.565155 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-01-10 14:14:08.565166 | orchestrator | Saturday 10 January 2026 14:13:58 +0000 (0:00:00.509) 0:06:53.232 ******
2026-01-10 14:14:08.565177 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:08.565188 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:08.565199 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:08.565210 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:08.565221 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:08.565231 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:08.565242 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:08.565253 | orchestrator |
2026-01-10 14:14:08.565264 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-01-10 14:14:08.565275 | orchestrator | Saturday 10 January 2026 14:14:00 +0000 (0:00:01.601) 0:06:54.834 ******
2026-01-10 14:14:08.565286 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:08.565297 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:08.565308 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:08.565319 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:08.565330 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:08.565340 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:08.565351 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:08.565362 | orchestrator |
2026-01-10 14:14:08.565373 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-01-10 14:14:08.565384 | orchestrator | Saturday 10 January 2026 14:14:00 +0000 (0:00:00.640) 0:06:55.474 ******
2026-01-10 14:14:08.565395 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:08.565406 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:08.565417 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:08.565450 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:08.565462 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:08.565473 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:08.565500 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:41.422145 | orchestrator |
2026-01-10 14:14:41.422251 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-01-10 14:14:41.422259 | orchestrator | Saturday 10 January 2026 14:14:08 +0000 (0:00:07.611) 0:07:03.086 ******
2026-01-10 14:14:41.422264 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.422270 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:41.422276 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:41.422280 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:41.422285 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:41.422289 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:41.422293 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:41.422298 | orchestrator |
2026-01-10 14:14:41.422302 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-01-10 14:14:41.422306 | orchestrator | Saturday 10 January 2026 14:14:10 +0000 (0:00:01.717) 0:07:04.803 ******
2026-01-10 14:14:41.422311 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.422315 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:41.422319 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:41.422323 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:41.422327 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:41.422331 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:41.422335 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:41.422339 | orchestrator |
2026-01-10 14:14:41.422344 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-01-10 14:14:41.422349 | orchestrator | Saturday 10 January 2026 14:14:12 +0000 (0:00:01.771) 0:07:06.574 ******
2026-01-10 14:14:41.422353 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.422357 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:41.422361 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:41.422365 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:41.422370 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:41.422374 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:41.422378 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:41.422382 | orchestrator |
2026-01-10 14:14:41.422386 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-10 14:14:41.422390 | orchestrator | Saturday 10 January 2026 14:14:13 +0000 (0:00:01.688) 0:07:08.262 ******
2026-01-10 14:14:41.422395 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.422399 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:41.422431 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:41.422436 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:41.422440 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:41.422444 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:41.422448 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:41.422453 | orchestrator |
2026-01-10 14:14:41.422457 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-10 14:14:41.422461 | orchestrator | Saturday 10 January 2026 14:14:14 +0000 (0:00:00.910) 0:07:09.173 ******
2026-01-10 14:14:41.422465 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:41.422469 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:41.422474 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:41.422478 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:41.422482 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:41.422486 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:41.422490 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:41.422494 | orchestrator |
2026-01-10 14:14:41.422498 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-10 14:14:41.422503 | orchestrator | Saturday 10 January 2026 14:14:15 +0000 (0:00:00.961) 0:07:10.135 ******
2026-01-10 14:14:41.422507 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:41.422511 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:41.422516 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:41.422549 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:41.422556 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:41.422562 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:41.422578 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:41.422583 | orchestrator |
2026-01-10 14:14:41.422590 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-10 14:14:41.422621 | orchestrator | Saturday 10 January 2026 14:14:16 +0000 (0:00:00.533) 0:07:10.668 ******
2026-01-10 14:14:41.422629 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.422635 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:41.422642 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:41.422649 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:41.422657 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:41.422664 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:41.422670 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:41.422677 | orchestrator |
2026-01-10 14:14:41.422685 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-10 14:14:41.422693 | orchestrator | Saturday 10 January 2026 14:14:16 +0000 (0:00:00.527) 0:07:11.196 ******
2026-01-10 14:14:41.422700 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.422707 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:41.422714 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:41.422721 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:41.422728 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:41.422734 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:41.422740 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:41.422747 | orchestrator |
2026-01-10 14:14:41.422755 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-10 14:14:41.422762 | orchestrator | Saturday 10 January 2026 14:14:17 +0000 (0:00:00.532) 0:07:11.728 ******
2026-01-10 14:14:41.422768 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.422774 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:41.422781 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:41.422787 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:41.422794 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:41.422800 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:41.422807 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:41.422813 | orchestrator |
2026-01-10 14:14:41.422820 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-10 14:14:41.422826 | orchestrator | Saturday 10 January 2026 14:14:17 +0000 (0:00:00.698) 0:07:12.427 ******
2026-01-10 14:14:41.422834 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.422841 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:41.422850 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:41.422857 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:41.422863 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:41.422870 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:41.422877 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:41.422885 | orchestrator |
2026-01-10 14:14:41.422912 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-10 14:14:41.422919 | orchestrator | Saturday 10 January 2026 14:14:23 +0000 (0:00:05.612) 0:07:18.039 ******
2026-01-10 14:14:41.422927 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:41.422936 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:41.422944 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:41.422950 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:41.422957 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:41.422964 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:41.422970 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:41.422976 | orchestrator |
2026-01-10 14:14:41.422984 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-10 14:14:41.422991 | orchestrator | Saturday 10 January 2026 14:14:24 +0000 (0:00:00.560) 0:07:18.599 ******
2026-01-10 14:14:41.423000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:14:41.423022 | orchestrator |
2026-01-10 14:14:41.423028 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-10 14:14:41.423035 | orchestrator | Saturday 10 January 2026 14:14:25 +0000 (0:00:01.070) 0:07:19.670 ******
2026-01-10 14:14:41.423041 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.423048 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:41.423055 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:41.423061 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:41.423068 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:41.423074 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:41.423081 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:41.423087 | orchestrator |
2026-01-10 14:14:41.423095 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-10 14:14:41.423101 | orchestrator | Saturday 10 January 2026 14:14:27 +0000 (0:00:01.978) 0:07:21.648 ******
2026-01-10 14:14:41.423108 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.423115 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:41.423121 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:41.423126 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:41.423130 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:41.423134 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:41.423138 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:41.423142 | orchestrator |
2026-01-10 14:14:41.423146 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-10 14:14:41.423150 | orchestrator | Saturday 10 January 2026 14:14:28 +0000 (0:00:01.182) 0:07:22.831 ******
2026-01-10 14:14:41.423154 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:41.423158 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:41.423162 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:41.423166 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:41.423170 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:41.423174 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:41.423178 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:41.423182 | orchestrator |
2026-01-10 14:14:41.423186 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-10 14:14:41.423190 | orchestrator | Saturday 10 January 2026 14:14:29 +0000 (0:00:00.839) 0:07:23.670 ******
2026-01-10 14:14:41.423195 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:14:41.423201 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:14:41.423206 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:14:41.423210 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:14:41.423214 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:14:41.423219 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:14:41.423223 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:14:41.423227 | orchestrator |
2026-01-10 14:14:41.423231 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-10 14:14:41.423235 | orchestrator | Saturday 10 January 2026 14:14:31 +0000 (0:00:01.951) 0:07:25.622 ******
2026-01-10 14:14:41.423240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:14:41.423249 | orchestrator |
2026-01-10 14:14:41.423254 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-10 14:14:41.423258 | orchestrator | Saturday 10 January 2026 14:14:31 +0000 (0:00:00.788) 0:07:26.411 ******
2026-01-10 14:14:41.423262 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:41.423266 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:41.423270 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:41.423274 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:41.423278 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:41.423282 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:41.423286 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:41.423290 | orchestrator |
2026-01-10 14:14:41.423301 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-10 14:15:14.043810 | orchestrator | Saturday 10 January 2026 14:14:41 +0000 (0:00:09.534) 0:07:35.945 ******
2026-01-10 14:15:14.043908 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:14.043924 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:15:14.043934 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:15:14.043943 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:15:14.043951 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:15:14.043960 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:15:14.043969 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:15:14.043977 | orchestrator |
2026-01-10 14:15:14.043987 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-10 14:15:14.043994 | orchestrator | Saturday 10 January 2026 14:14:43 +0000 (0:00:02.007) 0:07:37.953 ******
2026-01-10 14:15:14.044003 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:15:14.044011 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:15:14.044020 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:15:14.044028 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:15:14.044037 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:15:14.044044 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:15:14.044051 | orchestrator |
2026-01-10 14:15:14.044059 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-10 14:15:14.044067 | orchestrator | Saturday 10 January 2026 14:14:44 +0000 (0:00:01.386) 0:07:39.340 ******
2026-01-10 14:15:14.044092 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:14.044101 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:14.044108 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:14.044116 | orchestrator | changed:
[testbed-node-5] 2026-01-10 14:15:14.044123 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:15:14.044131 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:15:14.044138 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:15:14.044146 | orchestrator | 2026-01-10 14:15:14.044153 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-01-10 14:15:14.044160 | orchestrator | 2026-01-10 14:15:14.044168 | orchestrator | TASK [Include hardening role] ************************************************** 2026-01-10 14:15:14.044175 | orchestrator | Saturday 10 January 2026 14:14:46 +0000 (0:00:01.231) 0:07:40.571 ****** 2026-01-10 14:15:14.044183 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:15:14.044190 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:15:14.044198 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:15:14.044205 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:15:14.044213 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:15:14.044221 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:15:14.044229 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:15:14.044237 | orchestrator | 2026-01-10 14:15:14.044245 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-01-10 14:15:14.044253 | orchestrator | 2026-01-10 14:15:14.044261 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-01-10 14:15:14.044269 | orchestrator | Saturday 10 January 2026 14:14:46 +0000 (0:00:00.799) 0:07:41.370 ****** 2026-01-10 14:15:14.044277 | orchestrator | changed: [testbed-manager] 2026-01-10 14:15:14.044309 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:15:14.044319 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:15:14.044328 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:15:14.044336 | orchestrator | changed: [testbed-node-0] 2026-01-10 
14:15:14.044345 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:15:14.044353 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:15:14.044361 | orchestrator | 2026-01-10 14:15:14.044370 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-01-10 14:15:14.044379 | orchestrator | Saturday 10 January 2026 14:14:48 +0000 (0:00:01.342) 0:07:42.713 ****** 2026-01-10 14:15:14.044409 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:15:14.044430 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:15:14.044439 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:15:14.044448 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:15:14.044456 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:15:14.044465 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:15:14.044473 | orchestrator | ok: [testbed-manager] 2026-01-10 14:15:14.044482 | orchestrator | 2026-01-10 14:15:14.044491 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-01-10 14:15:14.044508 | orchestrator | Saturday 10 January 2026 14:14:50 +0000 (0:00:02.002) 0:07:44.716 ****** 2026-01-10 14:15:14.044518 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:15:14.044526 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:15:14.044534 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:15:14.044543 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:15:14.044551 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:15:14.044560 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:15:14.044568 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:15:14.044577 | orchestrator | 2026-01-10 14:15:14.044585 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-01-10 14:15:14.044594 | orchestrator | Saturday 10 January 2026 14:14:50 +0000 (0:00:00.525) 0:07:45.241 ****** 2026-01-10 14:15:14.044603 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:15:14.044613 | orchestrator | 2026-01-10 14:15:14.044622 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-01-10 14:15:14.044630 | orchestrator | Saturday 10 January 2026 14:14:51 +0000 (0:00:01.005) 0:07:46.247 ****** 2026-01-10 14:15:14.044640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:15:14.044651 | orchestrator | 2026-01-10 14:15:14.044660 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-01-10 14:15:14.044669 | orchestrator | Saturday 10 January 2026 14:14:52 +0000 (0:00:00.788) 0:07:47.036 ****** 2026-01-10 14:15:14.044677 | orchestrator | changed: [testbed-manager] 2026-01-10 14:15:14.044686 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:15:14.044695 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:15:14.044703 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:15:14.044711 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:15:14.044719 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:15:14.044728 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:15:14.044736 | orchestrator | 2026-01-10 14:15:14.044762 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-01-10 14:15:14.044771 | orchestrator | Saturday 10 January 2026 14:15:01 +0000 (0:00:09.433) 0:07:56.470 ****** 2026-01-10 14:15:14.044778 | orchestrator | changed: [testbed-manager] 2026-01-10 14:15:14.044787 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:15:14.044795 | orchestrator | changed: [testbed-node-4] 2026-01-10 
14:15:14.044803 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:15:14.044811 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:15:14.044830 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:15:14.044838 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:15:14.044846 | orchestrator | 2026-01-10 14:15:14.044854 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-01-10 14:15:14.044862 | orchestrator | Saturday 10 January 2026 14:15:03 +0000 (0:00:01.106) 0:07:57.576 ****** 2026-01-10 14:15:14.044869 | orchestrator | changed: [testbed-manager] 2026-01-10 14:15:14.044877 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:15:14.044886 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:15:14.044894 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:15:14.044902 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:15:14.044910 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:15:14.044918 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:15:14.044926 | orchestrator | 2026-01-10 14:15:14.044935 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-01-10 14:15:14.044943 | orchestrator | Saturday 10 January 2026 14:15:04 +0000 (0:00:01.318) 0:07:58.894 ****** 2026-01-10 14:15:14.044951 | orchestrator | changed: [testbed-manager] 2026-01-10 14:15:14.044959 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:15:14.044968 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:15:14.044976 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:15:14.044984 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:15:14.044992 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:15:14.045000 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:15:14.045008 | orchestrator | 2026-01-10 14:15:14.045016 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-01-10 14:15:14.045025 | orchestrator | Saturday 10 January 2026 14:15:06 +0000 (0:00:02.087) 0:08:00.982 ******
2026-01-10 14:15:14.045033 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:14.045041 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:14.045050 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:14.045058 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:14.045066 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:14.045074 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:14.045081 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:14.045089 | orchestrator |
2026-01-10 14:15:14.045097 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-10 14:15:14.045106 | orchestrator | Saturday 10 January 2026 14:15:07 +0000 (0:00:01.294) 0:08:02.276 ******
2026-01-10 14:15:14.045114 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:14.045122 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:14.045129 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:14.045136 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:14.045144 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:14.045152 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:14.045160 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:14.045167 | orchestrator |
2026-01-10 14:15:14.045175 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-10 14:15:14.045183 | orchestrator |
2026-01-10 14:15:14.045191 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-10 14:15:14.045198 | orchestrator | Saturday 10 January 2026 14:15:09 +0000 (0:00:01.280) 0:08:03.557 ******
2026-01-10 14:15:14.045207 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:15:14.045215 | orchestrator |
2026-01-10 14:15:14.045223 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-10 14:15:14.045237 | orchestrator | Saturday 10 January 2026 14:15:09 +0000 (0:00:00.860) 0:08:04.417 ******
2026-01-10 14:15:14.045245 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:14.045253 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:15:14.045261 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:15:14.045269 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:15:14.045277 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:15:14.045295 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:15:14.045303 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:15:14.045310 | orchestrator |
2026-01-10 14:15:14.045318 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-10 14:15:14.045326 | orchestrator | Saturday 10 January 2026 14:15:10 +0000 (0:00:01.093) 0:08:05.511 ******
2026-01-10 14:15:14.045334 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:14.045342 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:14.045350 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:14.045358 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:14.045366 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:14.045374 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:14.045400 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:14.045410 | orchestrator |
2026-01-10 14:15:14.045418 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-10 14:15:14.045427 | orchestrator | Saturday 10 January 2026 14:15:12 +0000 (0:00:01.154) 0:08:06.666 ******
2026-01-10 14:15:14.045435 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:15:14.045444 | orchestrator |
2026-01-10 14:15:14.045453 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-10 14:15:14.045462 | orchestrator | Saturday 10 January 2026 14:15:13 +0000 (0:00:01.022) 0:08:07.688 ******
2026-01-10 14:15:14.045470 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:14.045478 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:15:14.045486 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:15:14.045495 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:15:14.045504 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:15:14.045513 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:15:14.045522 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:15:14.045531 | orchestrator |
2026-01-10 14:15:14.045552 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-10 14:15:15.692770 | orchestrator | Saturday 10 January 2026 14:15:14 +0000 (0:00:00.875) 0:08:08.564 ******
2026-01-10 14:15:15.692888 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:15.692903 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:15.692913 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:15.692922 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:15.692930 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:15.692940 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:15.692949 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:15.692958 | orchestrator |
2026-01-10 14:15:15.692968 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:15:15.692978 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-10 14:15:15.692990 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-10 14:15:15.692999 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-10 14:15:15.693007 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-10 14:15:15.693016 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-10 14:15:15.693024 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-10 14:15:15.693033 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-10 14:15:15.693073 | orchestrator |
2026-01-10 14:15:15.693089 | orchestrator |
2026-01-10 14:15:15.693104 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:15:15.693118 | orchestrator | Saturday 10 January 2026 14:15:15 +0000 (0:00:01.158) 0:08:09.722 ******
2026-01-10 14:15:15.693133 | orchestrator | ===============================================================================
2026-01-10 14:15:15.693147 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.15s
2026-01-10 14:15:15.693160 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.19s
2026-01-10 14:15:15.693173 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.44s
2026-01-10 14:15:15.693186 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.73s
2026-01-10 14:15:15.693200 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.73s
2026-01-10 14:15:15.693215 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.96s
2026-01-10 14:15:15.693228 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.93s
2026-01-10 14:15:15.693243 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.32s
2026-01-10 14:15:15.693258 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.58s
2026-01-10 14:15:15.693290 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.53s
2026-01-10 14:15:15.693305 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.43s
2026-01-10 14:15:15.693321 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 9.21s
2026-01-10 14:15:15.693336 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.85s
2026-01-10 14:15:15.693352 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.70s
2026-01-10 14:15:15.693368 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.22s
2026-01-10 14:15:15.693424 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.80s
2026-01-10 14:15:15.693435 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.61s
2026-01-10 14:15:15.693446 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.49s
2026-01-10 14:15:15.693456 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.83s
2026-01-10 14:15:15.693470 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.79s
2026-01-10 14:15:16.064772 | orchestrator | + osism apply fail2ban
2026-01-10 14:15:28.708649 | orchestrator | 2026-01-10 14:15:28 | INFO  | Task 6001474f-ac0e-493d-ae6a-482b81b2a4fb (fail2ban) was prepared for execution.
2026-01-10 14:15:28.708747 | orchestrator | 2026-01-10 14:15:28 | INFO  | It takes a moment until task 6001474f-ac0e-493d-ae6a-482b81b2a4fb (fail2ban) has been started and output is visible here.
2026-01-10 14:15:50.969374 | orchestrator |
2026-01-10 14:15:50.969501 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-10 14:15:50.969516 | orchestrator |
2026-01-10 14:15:50.969524 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-10 14:15:50.969532 | orchestrator | Saturday 10 January 2026 14:15:33 +0000 (0:00:00.258) 0:00:00.258 ******
2026-01-10 14:15:50.969542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:15:50.969551 | orchestrator |
2026-01-10 14:15:50.969558 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-10 14:15:50.969566 | orchestrator | Saturday 10 January 2026 14:15:34 +0000 (0:00:01.202) 0:00:01.460 ******
2026-01-10 14:15:50.969574 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:50.969582 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:50.969625 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:50.969633 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:50.969640 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:50.969647 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:50.969653 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:50.969660 | orchestrator |
2026-01-10 14:15:50.969667 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-10 14:15:50.969673 | orchestrator | Saturday 10 January 2026 14:15:45 +0000 (0:00:11.407) 0:00:12.867 ******
2026-01-10 14:15:50.969680 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:50.969685 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:50.969691 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:50.969696 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:50.969702 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:50.969708 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:50.969715 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:50.969722 | orchestrator |
2026-01-10 14:15:50.969728 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-10 14:15:50.969735 | orchestrator | Saturday 10 January 2026 14:15:47 +0000 (0:00:01.404) 0:00:14.272 ******
2026-01-10 14:15:50.969741 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:50.969750 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:15:50.969757 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:15:50.969764 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:15:50.969770 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:15:50.969777 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:15:50.969784 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:15:50.969791 | orchestrator |
2026-01-10 14:15:50.969797 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-10 14:15:50.969804 | orchestrator | Saturday 10 January 2026 14:15:48 +0000 (0:00:01.465) 0:00:15.737 ******
2026-01-10 14:15:50.969811 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:50.969817 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:50.969824 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:50.969832 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:50.969838 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:50.969845 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:50.969852 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:50.969859 | orchestrator |
2026-01-10 14:15:50.969866 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:15:50.969873 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:15:50.969882 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:15:50.969889 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:15:50.969896 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:15:50.969919 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:15:50.969926 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:15:50.969933 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:15:50.969939 | orchestrator |
2026-01-10 14:15:50.969946 | orchestrator |
2026-01-10 14:15:50.969951 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:15:50.969958 | orchestrator | Saturday 10 January 2026 14:15:50 +0000 (0:00:01.716) 0:00:17.454 ******
2026-01-10 14:15:50.969973 | orchestrator | ===============================================================================
2026-01-10 14:15:50.969980 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.41s
2026-01-10 14:15:50.969986 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.72s
2026-01-10 14:15:50.969992 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.47s
2026-01-10 14:15:50.969998 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.40s
2026-01-10 14:15:50.970004 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.20s
2026-01-10 14:15:51.297487 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-10 14:15:51.297601 | orchestrator | + osism apply network
2026-01-10 14:16:03.423936 | orchestrator | 2026-01-10 14:16:03 | INFO  | Task 828340e8-a20b-4084-b79a-45cdc425953f (network) was prepared for execution.
2026-01-10 14:16:03.424044 | orchestrator | 2026-01-10 14:16:03 | INFO  | It takes a moment until task 828340e8-a20b-4084-b79a-45cdc425953f (network) has been started and output is visible here.
2026-01-10 14:16:32.724112 | orchestrator |
2026-01-10 14:16:32.724242 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-10 14:16:32.724255 | orchestrator |
2026-01-10 14:16:32.724264 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-10 14:16:32.724272 | orchestrator | Saturday 10 January 2026 14:16:07 +0000 (0:00:00.246) 0:00:00.246 ******
2026-01-10 14:16:32.724279 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:32.724287 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:32.724294 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:32.724302 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:32.724310 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:32.724344 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:32.724353 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:32.724360 | orchestrator |
2026-01-10 14:16:32.724367 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-10 14:16:32.724374 | orchestrator | Saturday 10 January 2026 14:16:08 +0000 (0:00:00.698) 0:00:00.945 ******
2026-01-10 14:16:32.724383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:16:32.724392 | orchestrator |
2026-01-10 14:16:32.724399 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-10 14:16:32.724406 | orchestrator | Saturday 10 January 2026 14:16:09 +0000 (0:00:01.168) 0:00:02.114 ******
2026-01-10 14:16:32.724413 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:32.724429 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:32.724440 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:32.724462 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:32.724473 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:32.724483 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:32.724496 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:32.724508 | orchestrator |
2026-01-10 14:16:32.724519 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-10 14:16:32.724532 | orchestrator | Saturday 10 January 2026 14:16:11 +0000 (0:00:02.143) 0:00:04.257 ******
2026-01-10 14:16:32.724539 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:32.724545 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:32.724552 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:32.724559 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:32.724566 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:32.724572 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:32.724579 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:32.724586 | orchestrator |
2026-01-10 14:16:32.724592 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-10 14:16:32.724600 | orchestrator | Saturday 10 January 2026 14:16:13 +0000 (0:00:01.821) 0:00:06.078 ******
2026-01-10 14:16:32.724631 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-10 14:16:32.724639 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-10 14:16:32.724646 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-10 14:16:32.724653 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-10 14:16:32.724660 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-10 14:16:32.724667 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-10 14:16:32.724675 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-10 14:16:32.724683 | orchestrator |
2026-01-10 14:16:32.724691 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-10 14:16:32.724698 | orchestrator | Saturday 10 January 2026 14:16:14 +0000 (0:00:01.035) 0:00:07.113 ******
2026-01-10 14:16:32.724707 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-10 14:16:32.724715 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-10 14:16:32.724723 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:16:32.724730 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:16:32.724737 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-10 14:16:32.724745 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-10 14:16:32.724752 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-10 14:16:32.724760 | orchestrator |
2026-01-10 14:16:32.724780 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-10 14:16:32.724788 | orchestrator | Saturday 10 January 2026 14:16:17 +0000 (0:00:03.291) 0:00:10.404 ******
2026-01-10 14:16:32.724796 | orchestrator | changed: [testbed-manager]
2026-01-10 14:16:32.724803 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:16:32.724811 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:16:32.724819 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:16:32.724826 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:16:32.724833 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:16:32.724840 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:16:32.724846 | orchestrator |
2026-01-10 14:16:32.724915 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-01-10 14:16:32.724926 | orchestrator | Saturday 10 January 2026 14:16:19 +0000 (0:00:01.637) 0:00:12.042 ******
2026-01-10 14:16:32.724937 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:16:32.724948 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:16:32.724960 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-10 14:16:32.724971 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-10 14:16:32.724981 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-10 14:16:32.724992 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-10 14:16:32.725002 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-10 14:16:32.725009 | orchestrator |
2026-01-10 14:16:32.725016 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-01-10 14:16:32.725022 | orchestrator | Saturday 10 January 2026 14:16:21 +0000 (0:00:01.872) 0:00:13.915 ******
2026-01-10 14:16:32.725029 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:32.725036 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:32.725043 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:32.725049 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:32.725056 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:32.725063 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:32.725069 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:32.725076 | orchestrator |
2026-01-10 14:16:32.725083 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-01-10 14:16:32.725104 |
orchestrator | Saturday 10 January 2026 14:16:22 +0000 (0:00:01.188) 0:00:15.104 ****** 2026-01-10 14:16:32.725112 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:16:32.725118 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:16:32.725125 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:16:32.725132 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:16:32.725138 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:16:32.725153 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:16:32.725160 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:16:32.725166 | orchestrator | 2026-01-10 14:16:32.725173 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-10 14:16:32.725180 | orchestrator | Saturday 10 January 2026 14:16:23 +0000 (0:00:00.633) 0:00:15.737 ****** 2026-01-10 14:16:32.725187 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:32.725193 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:32.725200 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:32.725206 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:32.725213 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:32.725219 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:32.725226 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:32.725233 | orchestrator | 2026-01-10 14:16:32.725239 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-10 14:16:32.725246 | orchestrator | Saturday 10 January 2026 14:16:25 +0000 (0:00:02.549) 0:00:18.287 ****** 2026-01-10 14:16:32.725253 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:16:32.725260 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:16:32.725266 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:16:32.725273 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:16:32.725279 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:16:32.725286 | 
orchestrator | skipping: [testbed-node-5] 2026-01-10 14:16:32.725294 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-10 14:16:32.725302 | orchestrator | 2026-01-10 14:16:32.725308 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-10 14:16:32.725363 | orchestrator | Saturday 10 January 2026 14:16:26 +0000 (0:00:00.881) 0:00:19.168 ****** 2026-01-10 14:16:32.725372 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:32.725378 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:16:32.725385 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:32.725392 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:32.725398 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:16:32.725405 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:16:32.725411 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:32.725418 | orchestrator | 2026-01-10 14:16:32.725424 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-10 14:16:32.725431 | orchestrator | Saturday 10 January 2026 14:16:28 +0000 (0:00:01.716) 0:00:20.884 ****** 2026-01-10 14:16:32.725438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:16:32.725446 | orchestrator | 2026-01-10 14:16:32.725453 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-10 14:16:32.725460 | orchestrator | Saturday 10 January 2026 14:16:29 +0000 (0:00:01.219) 0:00:22.104 ****** 2026-01-10 14:16:32.725466 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:32.725473 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:32.725479 | orchestrator 
| ok: [testbed-node-1] 2026-01-10 14:16:32.725486 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:32.725492 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:32.725499 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:32.725505 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:32.725512 | orchestrator | 2026-01-10 14:16:32.725519 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-10 14:16:32.725525 | orchestrator | Saturday 10 January 2026 14:16:30 +0000 (0:00:01.177) 0:00:23.282 ****** 2026-01-10 14:16:32.725532 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:32.725574 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:32.725582 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:32.725589 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:32.725596 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:32.725609 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:32.725616 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:32.725622 | orchestrator | 2026-01-10 14:16:32.725629 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-10 14:16:32.725636 | orchestrator | Saturday 10 January 2026 14:16:31 +0000 (0:00:00.704) 0:00:23.987 ****** 2026-01-10 14:16:32.725643 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:16:32.725649 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:16:32.725656 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:16:32.725663 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:16:32.725669 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:16:32.725676 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:16:32.725682 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:16:32.725689 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:16:32.725695 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:16:32.725702 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:16:32.725708 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:16:32.725715 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:16:32.725722 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:16:32.725728 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:16:32.725735 | orchestrator | 2026-01-10 14:16:32.725747 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-10 14:16:48.445537 | orchestrator | Saturday 10 January 2026 14:16:32 +0000 (0:00:01.319) 0:00:25.307 ****** 2026-01-10 14:16:48.445723 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:16:48.445741 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:16:48.445754 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:16:48.445782 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:16:48.445794 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:16:48.445818 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:16:48.445829 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:16:48.445841 | orchestrator | 2026-01-10 14:16:48.445855 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-10 14:16:48.445867 | orchestrator | Saturday 10 January 2026 14:16:33 +0000 (0:00:00.645) 0:00:25.953 ****** 2026-01-10 14:16:48.445899 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-5, testbed-node-2, testbed-node-0, testbed-manager, testbed-node-3, testbed-node-4 2026-01-10 14:16:48.445914 | orchestrator | 2026-01-10 14:16:48.445925 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-10 14:16:48.445937 | orchestrator | Saturday 10 January 2026 14:16:37 +0000 (0:00:04.593) 0:00:30.546 ****** 2026-01-10 14:16:48.445950 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.445964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.445976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446064 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 
23}}) 2026-01-10 14:16:48.446077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446216 | orchestrator | 2026-01-10 14:16:48.446227 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-10 14:16:48.446238 | orchestrator | Saturday 10 January 2026 14:16:43 +0000 (0:00:05.443) 0:00:35.990 ****** 2026-01-10 14:16:48.446249 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446291 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446323 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:16:48.446375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 
'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446408 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:16:48.446427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:17:01.548033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:17:01.548184 | orchestrator | 2026-01-10 14:17:01.548198 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-10 14:17:01.548209 | orchestrator | Saturday 10 January 2026 14:16:48 +0000 (0:00:05.032) 0:00:41.022 ****** 2026-01-10 14:17:01.548221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:17:01.548262 | orchestrator | 2026-01-10 14:17:01.548271 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
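The "Create systemd networkd netdev files" and "network files" tasks above template one `.netdev`/`.network` pair per vxlan item. As a minimal sketch of that rendering, the snippet below turns one of the logged item dicts into plausible unit-file contents; the unit layout and key names are assumptions for illustration, not the actual osism.commons.network templates:

```python
# Sketch only: render systemd-networkd vxlan unit contents from a task
# item shaped like the ones logged above. The [NetDev]/[VXLAN]/[Network]
# layout is an assumed rendering, not copied from the role's templates.
item = {
    "key": "vxlan0",
    "value": {
        "addresses": ["192.168.112.5/20"],
        "dests": ["192.168.16.10", "192.168.16.11"],
        "local_ip": "192.168.16.5",
        "mtu": 1350,
        "vni": 42,
    },
}

def render_netdev(item):
    """Build a hypothetical 30-<name>.netdev body for a vxlan device."""
    v = item["value"]
    return "\n".join([
        "[NetDev]",
        f"Name={item['key']}",
        "Kind=vxlan",
        f"MTUBytes={v['mtu']}",
        "",
        "[VXLAN]",
        f"VNI={v['vni']}",
        f"Local={v['local_ip']}",
    ])

def render_network(item):
    """Build a hypothetical 30-<name>.network body assigning addresses."""
    v = item["value"]
    lines = ["[Match]", f"Name={item['key']}", "", "[Network]"]
    lines += [f"Address={a}" for a in v["addresses"]]
    return "\n".join(lines)

print(render_netdev(item))
print(render_network(item))
```

Note how this mirrors the log: compute nodes have empty `addresses` lists for vxlan0, so their rendered `.network` file would carry no `Address=` line, while the manager gets `192.168.112.5/20`.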
2026-01-10 14:17:01.548280 | orchestrator | Saturday 10 January 2026 14:16:49 +0000 (0:00:01.096) 0:00:42.119 ******
2026-01-10 14:17:01.548331 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:01.548341 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:01.548349 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:01.548357 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:01.548365 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:01.548372 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:01.548380 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:01.548388 | orchestrator |
2026-01-10 14:17:01.548396 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-10 14:17:01.548404 | orchestrator | Saturday 10 January 2026 14:16:50 +0000 (0:00:01.077) 0:00:43.197 ******
2026-01-10 14:17:01.548412 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:17:01.548421 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:17:01.548429 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:17:01.548436 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:17:01.548444 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:17:01.548452 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:17:01.548460 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:17:01.548467 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:17:01.548475 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:01.548484 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:17:01.548492 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:17:01.548499 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:17:01.548507 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:17:01.548515 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:01.548523 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:17:01.548547 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:17:01.548555 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:17:01.548563 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:17:01.548570 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:01.548578 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:17:01.548586 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:17:01.548593 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:17:01.548601 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:17:01.548609 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:01.548616 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:17:01.548624 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:17:01.548632 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:17:01.548647 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:17:01.548655 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:01.548663 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:01.548671 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-10 14:17:01.548679 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-10 14:17:01.548686 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-10 14:17:01.548694 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-10 14:17:01.548701 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:01.548709 | orchestrator |
2026-01-10 14:17:01.548717 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-01-10 14:17:01.548743 | orchestrator | Saturday 10 January 2026 14:16:51 +0000 (0:00:01.018) 0:00:44.215 ******
2026-01-10 14:17:01.548752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:17:01.548761 | orchestrator |
2026-01-10 14:17:01.548769 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-01-10 14:17:01.548776 | orchestrator | Saturday 10 January 2026 14:16:52 +0000 (0:00:01.292) 0:00:45.507 ******
2026-01-10 14:17:01.548784 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:01.548792 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:01.548799 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:01.548807 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:01.548815 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:01.548822 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:01.548830 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:01.548837 | orchestrator |
2026-01-10 14:17:01.548845 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-01-10 14:17:01.548853 | orchestrator | Saturday 10 January 2026 14:16:53 +0000 (0:00:00.624) 0:00:46.132 ******
2026-01-10 14:17:01.548861 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:01.548868 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:01.548876 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:01.548884 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:01.548891 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:01.548899 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:01.548906 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:01.548914 | orchestrator |
2026-01-10 14:17:01.548922 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-01-10 14:17:01.548930 | orchestrator | Saturday 10 January 2026 14:16:54 +0000 (0:00:00.802) 0:00:46.934 ******
2026-01-10 14:17:01.548937 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:01.548945 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:01.548952 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:01.548960 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:01.548968 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:01.548975 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:01.548983 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:01.548991 | orchestrator |
2026-01-10 14:17:01.548998 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-01-10 14:17:01.549006 | orchestrator | Saturday 10 January 2026 14:16:54 +0000 (0:00:00.616) 0:00:47.551 ******
2026-01-10 14:17:01.549014 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:01.549022 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:01.549029 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:01.549037 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:01.549044 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:01.549052 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:01.549066 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:01.549074 | orchestrator |
2026-01-10 14:17:01.549082 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-01-10 14:17:01.549089 | orchestrator | Saturday 10 January 2026 14:16:56 +0000 (0:00:01.809) 0:00:49.360 ******
2026-01-10 14:17:01.549097 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:01.549105 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:01.549113 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:01.549120 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:01.549128 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:01.549135 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:01.549143 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:01.549151 | orchestrator |
2026-01-10 14:17:01.549159 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-01-10 14:17:01.549166 | orchestrator | Saturday 10 January 2026 14:16:57 +0000 (0:00:00.997) 0:00:50.358 ******
2026-01-10 14:17:01.549178 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:01.549186 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:01.549194 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:01.549202 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:01.549209 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:01.549217 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:01.549224 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:01.549232 | orchestrator |
2026-01-10 14:17:01.549240 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-10 14:17:01.549248 | orchestrator | Saturday 10 January 2026 14:17:00 +0000 (0:00:02.382) 0:00:52.740 ******
2026-01-10 14:17:01.549255 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:01.549263 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:01.549271 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:01.549278 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:01.549329 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:01.549339 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:01.549346 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:01.549354 | orchestrator |
2026-01-10 14:17:01.549362 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-10 14:17:01.549370 | orchestrator | Saturday 10 January 2026 14:17:00 +0000 (0:00:00.836) 0:00:53.577 ******
2026-01-10 14:17:01.549377 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:01.549386 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:01.549394 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:01.549401 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:01.549409 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:01.549417 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:01.549424 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:01.549432 | orchestrator |
2026-01-10 14:17:01.549440 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:17:01.549450 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-10 14:17:01.549460 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:17:01.549474 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:17:01.976772 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:17:01.976887 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:17:01.976896 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:17:01.976939 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:17:01.976946 | orchestrator |
2026-01-10 14:17:01.976955 | orchestrator |
2026-01-10 14:17:01.976963 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:17:01.976972 | orchestrator | Saturday 10 January 2026 14:17:01 +0000 (0:00:00.556) 0:00:54.134 ******
2026-01-10 14:17:01.976979 | orchestrator | ===============================================================================
2026-01-10 14:17:01.976986 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.44s
2026-01-10 14:17:01.976994 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.03s
2026-01-10 14:17:01.977000 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.59s
2026-01-10 14:17:01.977006 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.29s
2026-01-10 14:17:01.977013 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.55s
2026-01-10 14:17:01.977019 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.38s
2026-01-10 14:17:01.977025 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.14s
2026-01-10 14:17:01.977030 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.87s
2026-01-10 14:17:01.977035 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.82s
2026-01-10 14:17:01.977041 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.81s
2026-01-10 14:17:01.977047 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s
2026-01-10 14:17:01.977052 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.64s
2026-01-10 14:17:01.977058 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.32s
2026-01-10 14:17:01.977064 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.29s
2026-01-10 14:17:01.977069 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s
2026-01-10 14:17:01.977075 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s
2026-01-10 14:17:01.977081 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s
2026-01-10 14:17:01.977087 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.17s
2026-01-10 14:17:01.977092 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s
2026-01-10 14:17:01.977117 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.08s
2026-01-10 14:17:02.296657 | orchestrator | + osism apply wireguard
2026-01-10 14:17:14.482506 | orchestrator | 2026-01-10 14:17:14 | INFO  | Task 0815f2ea-5b14-412d-8f3f-0a33c20d76af (wireguard) was prepared for execution.
2026-01-10 14:17:14.482749 | orchestrator | 2026-01-10 14:17:14 | INFO  | It takes a moment until task 0815f2ea-5b14-412d-8f3f-0a33c20d76af (wireguard) has been started and output is visible here.
2026-01-10 14:17:35.145324 | orchestrator | 2026-01-10 14:17:35.145513 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-10 14:17:35.145534 | orchestrator | 2026-01-10 14:17:35.145546 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-10 14:17:35.145558 | orchestrator | Saturday 10 January 2026 14:17:18 +0000 (0:00:00.221) 0:00:00.221 ****** 2026-01-10 14:17:35.145570 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:35.145582 | orchestrator | 2026-01-10 14:17:35.145593 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-10 14:17:35.145609 | orchestrator | Saturday 10 January 2026 14:17:20 +0000 (0:00:01.630) 0:00:01.852 ****** 2026-01-10 14:17:35.145621 | orchestrator | changed: [testbed-manager] 2026-01-10 14:17:35.145633 | orchestrator | 2026-01-10 14:17:35.145644 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-10 14:17:35.145690 | orchestrator | Saturday 10 January 2026 14:17:27 +0000 (0:00:06.930) 0:00:08.783 ****** 2026-01-10 14:17:35.145702 | orchestrator | changed: [testbed-manager] 2026-01-10 14:17:35.145713 | orchestrator | 2026-01-10 14:17:35.145724 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-10 14:17:35.145735 | orchestrator | Saturday 10 January 2026 14:17:27 +0000 (0:00:00.550) 0:00:09.334 ****** 2026-01-10 14:17:35.145745 | orchestrator | changed: [testbed-manager] 2026-01-10 14:17:35.145756 | orchestrator | 2026-01-10 14:17:35.145767 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-10 14:17:35.145778 | orchestrator | Saturday 10 January 2026 14:17:28 +0000 (0:00:00.445) 0:00:09.780 ****** 2026-01-10 14:17:35.145789 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:35.145800 | orchestrator | 2026-01-10 
14:17:35.145811 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-10 14:17:35.145822 | orchestrator | Saturday 10 January 2026 14:17:28 +0000 (0:00:00.675) 0:00:10.455 ****** 2026-01-10 14:17:35.145833 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:35.145844 | orchestrator | 2026-01-10 14:17:35.145854 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-10 14:17:35.145865 | orchestrator | Saturday 10 January 2026 14:17:29 +0000 (0:00:00.448) 0:00:10.904 ****** 2026-01-10 14:17:35.145876 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:35.145887 | orchestrator | 2026-01-10 14:17:35.145898 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-10 14:17:35.145909 | orchestrator | Saturday 10 January 2026 14:17:29 +0000 (0:00:00.415) 0:00:11.319 ****** 2026-01-10 14:17:35.145920 | orchestrator | changed: [testbed-manager] 2026-01-10 14:17:35.145931 | orchestrator | 2026-01-10 14:17:35.145942 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-01-10 14:17:35.145953 | orchestrator | Saturday 10 January 2026 14:17:31 +0000 (0:00:01.219) 0:00:12.539 ****** 2026-01-10 14:17:35.145964 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 14:17:35.145975 | orchestrator | changed: [testbed-manager] 2026-01-10 14:17:35.145986 | orchestrator | 2026-01-10 14:17:35.145997 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-10 14:17:35.146008 | orchestrator | Saturday 10 January 2026 14:17:32 +0000 (0:00:00.955) 0:00:13.495 ****** 2026-01-10 14:17:35.146077 | orchestrator | changed: [testbed-manager] 2026-01-10 14:17:35.146089 | orchestrator | 2026-01-10 14:17:35.146100 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-10 
14:17:35.146112 | orchestrator | Saturday 10 January 2026 14:17:33 +0000 (0:00:01.788) 0:00:15.284 ****** 2026-01-10 14:17:35.146123 | orchestrator | changed: [testbed-manager] 2026-01-10 14:17:35.146138 | orchestrator | 2026-01-10 14:17:35.146157 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:17:35.146175 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:17:35.146194 | orchestrator | 2026-01-10 14:17:35.146215 | orchestrator | 2026-01-10 14:17:35.146227 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:17:35.146238 | orchestrator | Saturday 10 January 2026 14:17:34 +0000 (0:00:00.948) 0:00:16.232 ****** 2026-01-10 14:17:35.146365 | orchestrator | =============================================================================== 2026-01-10 14:17:35.146377 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.93s 2026-01-10 14:17:35.146389 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.79s 2026-01-10 14:17:35.146400 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.63s 2026-01-10 14:17:35.146411 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s 2026-01-10 14:17:35.146423 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s 2026-01-10 14:17:35.146435 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s 2026-01-10 14:17:35.146458 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.68s 2026-01-10 14:17:35.146469 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-01-10 14:17:35.146480 | orchestrator | osism.services.wireguard : Get 
public key - server ---------------------- 0.45s 2026-01-10 14:17:35.146491 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2026-01-10 14:17:35.146503 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2026-01-10 14:17:35.451998 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-10 14:17:35.485554 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-01-10 14:17:35.485665 | orchestrator | Dload Upload Total Spent Left Speed 2026-01-10 14:17:35.567712 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 174 0 --:--:-- --:--:-- --:--:-- 177 2026-01-10 14:17:35.580891 | orchestrator | + osism apply --environment custom workarounds 2026-01-10 14:17:37.675594 | orchestrator | 2026-01-10 14:17:37 | INFO  | Trying to run play workarounds in environment custom 2026-01-10 14:17:47.923476 | orchestrator | 2026-01-10 14:17:47 | INFO  | Task a269d4ef-3462-48c7-baed-d1c4794d2e25 (workarounds) was prepared for execution. 2026-01-10 14:17:47.923594 | orchestrator | 2026-01-10 14:17:47 | INFO  | It takes a moment until task a269d4ef-3462-48c7-baed-d1c4794d2e25 (workarounds) has been started and output is visible here. 
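The wireguard play above installs the package, generates the server keypair and preshared key (the standard tooling for this is `wg genkey | wg pubkey` and `wg genpsk`), writes `/etc/wireguard/wg0.conf`, and enables `wg-quick@wg0.service`. As a rough illustration of what such a configuration file looks like — all addresses, port, and key placeholders below are hypothetical, not values from this deployment:

```ini
; /etc/wireguard/wg0.conf -- illustrative sketch only; addresses, port and
; keys are placeholders, not taken from this build
[Interface]
Address = 192.168.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.0.2/32
```

The "Copy client configuration files" task then renders the mirror-image `[Peer]` section (server public key, same preshared key) into per-client config files.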
2026-01-10 14:18:12.682468 | orchestrator | 2026-01-10 14:18:12.682586 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:18:12.682602 | orchestrator | 2026-01-10 14:18:12.682613 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-01-10 14:18:12.682624 | orchestrator | Saturday 10 January 2026 14:17:52 +0000 (0:00:00.139) 0:00:00.139 ****** 2026-01-10 14:18:12.682635 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-01-10 14:18:12.682646 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-01-10 14:18:12.682656 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-01-10 14:18:12.682666 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-01-10 14:18:12.682676 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-01-10 14:18:12.682686 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-01-10 14:18:12.682695 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-01-10 14:18:12.682705 | orchestrator | 2026-01-10 14:18:12.682715 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-01-10 14:18:12.682725 | orchestrator | 2026-01-10 14:18:12.682734 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-10 14:18:12.682744 | orchestrator | Saturday 10 January 2026 14:17:53 +0000 (0:00:00.800) 0:00:00.940 ****** 2026-01-10 14:18:12.682754 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:12.682765 | orchestrator | 2026-01-10 14:18:12.682775 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-01-10 14:18:12.682785 | orchestrator | 2026-01-10 14:18:12.682795 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-01-10 14:18:12.682805 | orchestrator | Saturday 10 January 2026 14:17:55 +0000 (0:00:02.326) 0:00:03.267 ****** 2026-01-10 14:18:12.682815 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:18:12.682824 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:18:12.682834 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:18:12.682843 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:12.682853 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:18:12.682863 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:18:12.682872 | orchestrator | 2026-01-10 14:18:12.682882 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-01-10 14:18:12.682918 | orchestrator | 2026-01-10 14:18:12.682928 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-01-10 14:18:12.682938 | orchestrator | Saturday 10 January 2026 14:17:57 +0000 (0:00:01.807) 0:00:05.075 ****** 2026-01-10 14:18:12.682948 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-10 14:18:12.682959 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-10 14:18:12.682969 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-10 14:18:12.682979 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-10 14:18:12.682989 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-10 14:18:12.682998 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-10 14:18:12.683008 | orchestrator | 2026-01-10 14:18:12.683017 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-01-10 14:18:12.683027 | orchestrator | Saturday 10 January 2026 14:17:59 +0000 (0:00:01.599) 0:00:06.674 ****** 2026-01-10 14:18:12.683037 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:12.683047 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:18:12.683056 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:18:12.683066 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:18:12.683075 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:18:12.683085 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:18:12.683094 | orchestrator | 2026-01-10 14:18:12.683104 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-01-10 14:18:12.683114 | orchestrator | Saturday 10 January 2026 14:18:02 +0000 (0:00:03.412) 0:00:10.086 ****** 2026-01-10 14:18:12.683123 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:18:12.683133 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:18:12.683142 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:18:12.683152 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:18:12.683162 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:18:12.683171 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:18:12.683181 | orchestrator | 2026-01-10 14:18:12.683210 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-01-10 14:18:12.683221 | orchestrator | 2026-01-10 14:18:12.683232 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-01-10 14:18:12.683241 | orchestrator | Saturday 10 January 2026 14:18:03 +0000 (0:00:00.668) 0:00:10.755 ****** 2026-01-10 14:18:12.683251 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:12.683261 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:18:12.683270 | orchestrator | changed: [testbed-node-5] 2026-01-10 
14:18:12.683280 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:18:12.683289 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:18:12.683298 | orchestrator | changed: [testbed-manager] 2026-01-10 14:18:12.683308 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:18:12.683317 | orchestrator | 2026-01-10 14:18:12.683327 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-01-10 14:18:12.683337 | orchestrator | Saturday 10 January 2026 14:18:04 +0000 (0:00:01.500) 0:00:12.255 ****** 2026-01-10 14:18:12.683346 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:12.683356 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:18:12.683366 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:18:12.683375 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:18:12.683385 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:18:12.683394 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:18:12.683423 | orchestrator | changed: [testbed-manager] 2026-01-10 14:18:12.683449 | orchestrator | 2026-01-10 14:18:12.683467 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-01-10 14:18:12.683477 | orchestrator | Saturday 10 January 2026 14:18:06 +0000 (0:00:01.530) 0:00:13.785 ****** 2026-01-10 14:18:12.683487 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:18:12.683496 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:12.683506 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:18:12.683515 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:18:12.683524 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:18:12.683534 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:18:12.683543 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:12.683553 | orchestrator | 2026-01-10 14:18:12.683562 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-01-10 14:18:12.683572 | orchestrator 
| Saturday 10 January 2026 14:18:07 +0000 (0:00:01.571) 0:00:15.357 ****** 2026-01-10 14:18:12.683582 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:12.683591 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:18:12.683600 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:18:12.683610 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:18:12.683619 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:18:12.683629 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:18:12.683638 | orchestrator | changed: [testbed-manager] 2026-01-10 14:18:12.683648 | orchestrator | 2026-01-10 14:18:12.683657 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-01-10 14:18:12.683667 | orchestrator | Saturday 10 January 2026 14:18:09 +0000 (0:00:01.712) 0:00:17.070 ****** 2026-01-10 14:18:12.683676 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:18:12.683686 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:18:12.683695 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:18:12.683704 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:18:12.683714 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:18:12.683723 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:18:12.683733 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:18:12.683742 | orchestrator | 2026-01-10 14:18:12.683752 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-01-10 14:18:12.683761 | orchestrator | 2026-01-10 14:18:12.683771 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-01-10 14:18:12.683781 | orchestrator | Saturday 10 January 2026 14:18:10 +0000 (0:00:00.591) 0:00:17.661 ****** 2026-01-10 14:18:12.683790 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:12.683800 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:18:12.683809 | orchestrator | ok: [testbed-node-3] 
2026-01-10 14:18:12.683819 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:18:12.683828 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:18:12.683837 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:12.683847 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:18:12.683856 | orchestrator | 2026-01-10 14:18:12.683866 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:18:12.683877 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:18:12.683888 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:12.683898 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:12.683907 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:12.683917 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:12.683926 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:12.683942 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:12.683952 | orchestrator | 2026-01-10 14:18:12.683962 | orchestrator | 2026-01-10 14:18:12.683971 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:18:12.683981 | orchestrator | Saturday 10 January 2026 14:18:12 +0000 (0:00:02.645) 0:00:20.307 ****** 2026-01-10 14:18:12.683991 | orchestrator | =============================================================================== 2026-01-10 14:18:12.684000 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.41s 2026-01-10 14:18:12.684014 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.65s 2026-01-10 14:18:12.684024 | orchestrator | Apply netplan configuration --------------------------------------------- 2.33s 2026-01-10 14:18:12.684033 | orchestrator | Apply netplan configuration --------------------------------------------- 1.81s 2026-01-10 14:18:12.684043 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.71s 2026-01-10 14:18:12.684052 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.60s 2026-01-10 14:18:12.684061 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.57s 2026-01-10 14:18:12.684071 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.53s 2026-01-10 14:18:12.684080 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.50s 2026-01-10 14:18:12.684090 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s 2026-01-10 14:18:12.684099 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.67s 2026-01-10 14:18:12.684114 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.59s 2026-01-10 14:18:13.291897 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-01-10 14:18:25.363480 | orchestrator | 2026-01-10 14:18:25 | INFO  | Task 7878c53a-a994-4905-9c42-f57049155482 (reboot) was prepared for execution. 2026-01-10 14:18:25.363605 | orchestrator | 2026-01-10 14:18:25 | INFO  | It takes a moment until task 7878c53a-a994-4905-9c42-f57049155482 (reboot) has been started and output is visible here. 
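The workarounds play above copies a `workarounds.sh` script and a systemd unit to every node, reloads systemd, and enables the service. A minimal sketch of what such a unit could look like — the description, script path, and ordering are assumptions, not taken from the role:

```ini
; workarounds.service -- illustrative oneshot unit; all names and the
; ExecStart path are hypothetical
[Unit]
Description=Apply local workarounds at boot
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/workarounds.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
```

A `Type=oneshot` unit with `RemainAfterExit=true` runs the script once per boot and then reports as active, which matches the enable-only handling seen in the play (the Debian branch enables the unit without starting it immediately).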
2026-01-10 14:18:35.668060 | orchestrator | 2026-01-10 14:18:35.668201 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:18:35.668215 | orchestrator | 2026-01-10 14:18:35.668223 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-10 14:18:35.668231 | orchestrator | Saturday 10 January 2026 14:18:29 +0000 (0:00:00.194) 0:00:00.194 ****** 2026-01-10 14:18:35.668239 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:18:35.668248 | orchestrator | 2026-01-10 14:18:35.668255 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:18:35.668263 | orchestrator | Saturday 10 January 2026 14:18:29 +0000 (0:00:00.124) 0:00:00.319 ****** 2026-01-10 14:18:35.668270 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:18:35.668277 | orchestrator | 2026-01-10 14:18:35.668285 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-10 14:18:35.668293 | orchestrator | Saturday 10 January 2026 14:18:30 +0000 (0:00:00.939) 0:00:01.258 ****** 2026-01-10 14:18:35.668300 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:18:35.668307 | orchestrator | 2026-01-10 14:18:35.668337 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:18:35.668345 | orchestrator | 2026-01-10 14:18:35.668352 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-10 14:18:35.668360 | orchestrator | Saturday 10 January 2026 14:18:30 +0000 (0:00:00.116) 0:00:01.375 ****** 2026-01-10 14:18:35.668367 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:18:35.668374 | orchestrator | 2026-01-10 14:18:35.668381 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:18:35.668419 | orchestrator | Saturday 10 January 
2026 14:18:30 +0000 (0:00:00.096) 0:00:01.472 ****** 2026-01-10 14:18:35.668427 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:18:35.668434 | orchestrator | 2026-01-10 14:18:35.668442 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-10 14:18:35.668449 | orchestrator | Saturday 10 January 2026 14:18:31 +0000 (0:00:00.701) 0:00:02.173 ****** 2026-01-10 14:18:35.668456 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:18:35.668464 | orchestrator | 2026-01-10 14:18:35.668471 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:18:35.668478 | orchestrator | 2026-01-10 14:18:35.668485 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-10 14:18:35.668493 | orchestrator | Saturday 10 January 2026 14:18:31 +0000 (0:00:00.122) 0:00:02.295 ****** 2026-01-10 14:18:35.668500 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:18:35.668507 | orchestrator | 2026-01-10 14:18:35.668514 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:18:35.668522 | orchestrator | Saturday 10 January 2026 14:18:31 +0000 (0:00:00.194) 0:00:02.490 ****** 2026-01-10 14:18:35.668530 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:18:35.668537 | orchestrator | 2026-01-10 14:18:35.668544 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-10 14:18:35.668551 | orchestrator | Saturday 10 January 2026 14:18:32 +0000 (0:00:00.678) 0:00:03.168 ****** 2026-01-10 14:18:35.668558 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:18:35.668566 | orchestrator | 2026-01-10 14:18:35.668573 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:18:35.668580 | orchestrator | 2026-01-10 14:18:35.668587 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-01-10 14:18:35.668595 | orchestrator | Saturday 10 January 2026 14:18:32 +0000 (0:00:00.115) 0:00:03.284 ****** 2026-01-10 14:18:35.668602 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:18:35.668610 | orchestrator | 2026-01-10 14:18:35.668617 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:18:35.668626 | orchestrator | Saturday 10 January 2026 14:18:32 +0000 (0:00:00.115) 0:00:03.400 ****** 2026-01-10 14:18:35.668634 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:35.668642 | orchestrator | 2026-01-10 14:18:35.668650 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-10 14:18:35.668659 | orchestrator | Saturday 10 January 2026 14:18:33 +0000 (0:00:00.704) 0:00:04.104 ****** 2026-01-10 14:18:35.668667 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:18:35.668675 | orchestrator | 2026-01-10 14:18:35.668683 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:18:35.668691 | orchestrator | 2026-01-10 14:18:35.668712 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-10 14:18:35.668720 | orchestrator | Saturday 10 January 2026 14:18:33 +0000 (0:00:00.114) 0:00:04.218 ****** 2026-01-10 14:18:35.668728 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:18:35.668737 | orchestrator | 2026-01-10 14:18:35.668745 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:18:35.668753 | orchestrator | Saturday 10 January 2026 14:18:33 +0000 (0:00:00.096) 0:00:04.315 ****** 2026-01-10 14:18:35.668761 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:18:35.668769 | orchestrator | 2026-01-10 14:18:35.668777 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-01-10 14:18:35.668786 | orchestrator | Saturday 10 January 2026 14:18:34 +0000 (0:00:00.752) 0:00:05.067 ****** 2026-01-10 14:18:35.668794 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:18:35.668802 | orchestrator | 2026-01-10 14:18:35.668811 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:18:35.668819 | orchestrator | 2026-01-10 14:18:35.668828 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-10 14:18:35.668836 | orchestrator | Saturday 10 January 2026 14:18:34 +0000 (0:00:00.122) 0:00:05.190 ****** 2026-01-10 14:18:35.668851 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:18:35.668860 | orchestrator | 2026-01-10 14:18:35.668868 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:18:35.668876 | orchestrator | Saturday 10 January 2026 14:18:34 +0000 (0:00:00.101) 0:00:05.292 ****** 2026-01-10 14:18:35.668885 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:18:35.668893 | orchestrator | 2026-01-10 14:18:35.668901 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-10 14:18:35.668910 | orchestrator | Saturday 10 January 2026 14:18:35 +0000 (0:00:00.687) 0:00:05.979 ****** 2026-01-10 14:18:35.668930 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:18:35.668939 | orchestrator | 2026-01-10 14:18:35.668948 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:18:35.668958 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:35.668967 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:35.668975 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-01-10 14:18:35.668982 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:35.668989 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:35.668996 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:18:35.669003 | orchestrator | 2026-01-10 14:18:35.669011 | orchestrator | 2026-01-10 14:18:35.669018 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:18:35.669025 | orchestrator | Saturday 10 January 2026 14:18:35 +0000 (0:00:00.044) 0:00:06.024 ****** 2026-01-10 14:18:35.669032 | orchestrator | =============================================================================== 2026-01-10 14:18:35.669039 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.46s 2026-01-10 14:18:35.669047 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.73s 2026-01-10 14:18:35.669054 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s 2026-01-10 14:18:35.997276 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-10 14:18:48.123353 | orchestrator | 2026-01-10 14:18:48 | INFO  | Task 80232e15-8ccc-4911-bb1f-fd6cfadba39f (wait-for-connection) was prepared for execution. 2026-01-10 14:18:48.123418 | orchestrator | 2026-01-10 14:18:48 | INFO  | It takes a moment until task 80232e15-8ccc-4911-bb1f-fd6cfadba39f (wait-for-connection) has been started and output is visible here. 
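The sequence above uses a common Ansible pattern: trigger the reboot without waiting for it to complete, then reconnect in a separate `wait-for-connection` play. A sketch of the two plays, assuming the task names from the log; the async fire-and-forget shell reboot and the timeout values are illustrative, not the playbook's actual implementation:

```yaml
# Illustrative sketch of the reboot-then-wait pattern; timeouts and the
# async reboot mechanism are assumptions.
- name: Reboot systems
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && reboot
      async: 1
      poll: 0
      when: ireallymeanit == 'yes'

- name: Wait until remote systems are reachable
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Wait until remote system is reachable
      ansible.builtin.wait_for_connection:
        delay: 5
        timeout: 600
```

Running the two as separate invocations (as `osism apply reboot` followed by `osism apply wait-for-connection` does here) lets all nodes reboot in parallel before any reconnect check starts.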
2026-01-10 14:19:04.551743 | orchestrator |
2026-01-10 14:19:04.551796 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-10 14:19:04.551802 | orchestrator |
2026-01-10 14:19:04.551807 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-10 14:19:04.551811 | orchestrator | Saturday 10 January 2026 14:18:52 +0000 (0:00:00.276) 0:00:00.276 ******
2026-01-10 14:19:04.551815 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:04.551820 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:04.551825 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:04.551829 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:04.551832 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:04.551836 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:04.551840 | orchestrator |
2026-01-10 14:19:04.551844 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:19:04.551860 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:04.551866 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:04.551874 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:04.551878 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:04.551882 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:04.551886 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:04.551890 | orchestrator |
2026-01-10 14:19:04.551893 | orchestrator |
2026-01-10 14:19:04.551897 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:19:04.551901 | orchestrator | Saturday 10 January 2026 14:19:04 +0000 (0:00:11.664) 0:00:11.940 ******
2026-01-10 14:19:04.551905 | orchestrator | ===============================================================================
2026-01-10 14:19:04.551909 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.66s
2026-01-10 14:19:04.848468 | orchestrator | + osism apply hddtemp
2026-01-10 14:19:16.902294 | orchestrator | 2026-01-10 14:19:16 | INFO  | Task 00373f59-e91f-4cfb-ad70-ca61d8629dea (hddtemp) was prepared for execution.
2026-01-10 14:19:16.902402 | orchestrator | 2026-01-10 14:19:16 | INFO  | It takes a moment until task 00373f59-e91f-4cfb-ad70-ca61d8629dea (hddtemp) has been started and output is visible here.
2026-01-10 14:19:45.671785 | orchestrator |
2026-01-10 14:19:45.671863 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-10 14:19:45.671870 | orchestrator |
2026-01-10 14:19:45.671874 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-10 14:19:45.671879 | orchestrator | Saturday 10 January 2026 14:19:21 +0000 (0:00:00.187) 0:00:00.187 ******
2026-01-10 14:19:45.671883 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:45.671888 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:45.671892 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:45.671896 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:45.671900 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:45.671904 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:45.671907 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:45.671911 | orchestrator |
2026-01-10 14:19:45.671915 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-10 14:19:45.671919 | orchestrator | Saturday 10 January 2026 14:19:21 +0000 (0:00:00.528) 0:00:00.716 ******
2026-01-10 14:19:45.671925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:19:45.671931 | orchestrator |
2026-01-10 14:19:45.671934 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-10 14:19:45.671938 | orchestrator | Saturday 10 January 2026 14:19:22 +0000 (0:00:01.081) 0:00:01.797 ******
2026-01-10 14:19:45.671942 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:45.671946 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:45.671950 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:45.671954 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:45.671958 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:45.671961 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:45.671965 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:45.671969 | orchestrator |
2026-01-10 14:19:45.671973 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-10 14:19:45.671993 | orchestrator | Saturday 10 January 2026 14:19:24 +0000 (0:00:02.035) 0:00:03.832 ******
2026-01-10 14:19:45.671997 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:45.672001 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:45.672005 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:45.672009 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:45.672012 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:45.672016 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:45.672020 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:45.672023 | orchestrator |
2026-01-10 14:19:45.672027 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-10 14:19:45.672031 | orchestrator | Saturday 10 January 2026 14:19:25 +0000 (0:00:01.007) 0:00:04.840 ******
2026-01-10 14:19:45.672035 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:45.672039 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:45.672042 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:45.672046 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:45.672050 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:45.672053 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:45.672057 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:45.672084 | orchestrator |
2026-01-10 14:19:45.672088 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-10 14:19:45.672092 | orchestrator | Saturday 10 January 2026 14:19:26 +0000 (0:00:01.119) 0:00:05.960 ******
2026-01-10 14:19:45.672096 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:19:45.672100 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:19:45.672104 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:45.672107 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:19:45.672111 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:19:45.672115 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:19:45.672119 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:19:45.672122 | orchestrator |
2026-01-10 14:19:45.672126 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-10 14:19:45.672130 | orchestrator | Saturday 10 January 2026 14:19:27 +0000 (0:00:00.803) 0:00:06.763 ******
2026-01-10 14:19:45.672134 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:45.672137 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:45.672141 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:45.672145 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:45.672148 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:45.672152 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:45.672156 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:45.672160 | orchestrator |
2026-01-10 14:19:45.672174 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-10 14:19:45.672178 | orchestrator | Saturday 10 January 2026 14:19:42 +0000 (0:00:14.483) 0:00:21.247 ******
2026-01-10 14:19:45.672182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:19:45.672186 | orchestrator |
2026-01-10 14:19:45.672190 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-10 14:19:45.672193 | orchestrator | Saturday 10 January 2026 14:19:43 +0000 (0:00:01.205) 0:00:22.453 ******
2026-01-10 14:19:45.672197 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:45.672201 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:19:45.672205 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:19:45.672208 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:19:45.672212 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:19:45.672216 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:19:45.672220 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:19:45.672223 | orchestrator |
2026-01-10 14:19:45.672227 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:19:45.672235 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:45.672248 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:19:45.672252 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:19:45.672256 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:19:45.672260 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:19:45.672264 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:19:45.672267 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:19:45.672271 | orchestrator |
2026-01-10 14:19:45.672275 | orchestrator |
2026-01-10 14:19:45.672279 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:19:45.672282 | orchestrator | Saturday 10 January 2026 14:19:45 +0000 (0:00:01.949) 0:00:24.403 ******
2026-01-10 14:19:45.672286 | orchestrator | ===============================================================================
2026-01-10 14:19:45.672290 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.48s
2026-01-10 14:19:45.672294 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.04s
2026-01-10 14:19:45.672297 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.95s
2026-01-10 14:19:45.672301 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.21s
2026-01-10 14:19:45.672305 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.12s
2026-01-10 14:19:45.672309 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.08s
2026-01-10 14:19:45.672312 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.01s
2026-01-10 14:19:45.672316 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.80s
2026-01-10 14:19:45.672320 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.53s
2026-01-10 14:19:45.979801 | orchestrator | ++ semver latest 7.1.1
2026-01-10 14:19:46.037487 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-10 14:19:46.037574 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-10 14:19:46.037589 | orchestrator | + sudo systemctl restart manager.service
2026-01-10 14:20:00.204573 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-10 14:20:00.204676 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-10 14:20:00.204691 | orchestrator | + local max_attempts=60
2026-01-10 14:20:00.204705 | orchestrator | + local name=ceph-ansible
2026-01-10 14:20:00.204716 | orchestrator | + local attempt_num=1
2026-01-10 14:20:00.204728 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:00.252899 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:00.252978 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:00.253242 | orchestrator | + sleep 5
2026-01-10 14:20:05.257966 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:05.363114 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:05.363197 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:05.363213 | orchestrator | + sleep 5
2026-01-10 14:20:10.367175 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:10.407085 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:10.407198 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:10.407212 | orchestrator | + sleep 5
2026-01-10 14:20:15.412686 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:15.450920 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:15.451078 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:15.451094 | orchestrator | + sleep 5
2026-01-10 14:20:20.455543 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:20.498217 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:20.498313 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:20.498336 | orchestrator | + sleep 5
2026-01-10 14:20:25.502390 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:25.542623 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:25.542714 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:25.542728 | orchestrator | + sleep 5
2026-01-10 14:20:30.547105 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:30.583937 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:30.584070 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:30.584088 | orchestrator | + sleep 5
2026-01-10 14:20:35.592970 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:35.634131 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:35.634225 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:35.634239 | orchestrator | + sleep 5
2026-01-10 14:20:40.636570 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:40.693542 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:40.693630 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:40.693642 | orchestrator | + sleep 5
2026-01-10 14:20:45.696572 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:45.736462 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:45.736564 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:45.736579 | orchestrator | + sleep 5
2026-01-10 14:20:50.739637 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:50.777806 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:50.777895 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:50.777909 | orchestrator | + sleep 5
2026-01-10 14:20:55.782313 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:20:55.820847 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:20:55.820926 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:20:55.820937 | orchestrator | + sleep 5
2026-01-10 14:21:00.825739 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:21:00.876319 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:21:00.876442 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:21:00.876469 | orchestrator | + sleep 5
2026-01-10 14:21:05.881333 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:21:05.919095 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:21:05.919194 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-10 14:21:05.919211 | orchestrator | + local max_attempts=60
2026-01-10 14:21:05.919225 | orchestrator | + local name=kolla-ansible
2026-01-10 14:21:05.919237 | orchestrator | + local attempt_num=1
2026-01-10 14:21:05.919785 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-10 14:21:05.956130 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:21:05.956205 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-10 14:21:05.956219 | orchestrator | + local max_attempts=60
2026-01-10 14:21:05.956232 | orchestrator | + local name=osism-ansible
2026-01-10 14:21:05.956244 | orchestrator | + local attempt_num=1
2026-01-10 14:21:05.956763 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-10 14:21:05.996382 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:21:05.996459 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-10 14:21:05.996473 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-10 14:21:06.154862 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-10 14:21:06.308900 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-10 14:21:06.467830 | orchestrator | ARA in osism-ansible already disabled.
2026-01-10 14:21:06.610265 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-10 14:21:06.611191 | orchestrator | + osism apply gather-facts
2026-01-10 14:21:18.887447 | orchestrator | 2026-01-10 14:21:18 | INFO  | Task 6db97ca6-7bec-4b96-8b61-da6a2a0d0663 (gather-facts) was prepared for execution.
2026-01-10 14:21:18.887551 | orchestrator | 2026-01-10 14:21:18 | INFO  | It takes a moment until task 6db97ca6-7bec-4b96-8b61-da6a2a0d0663 (gather-facts) has been started and output is visible here.
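The long `+`/`++` trace above is the `wait_for_container_healthy` helper polling `docker inspect` every 5 seconds until the container's health status flips from `unhealthy` through `starting` to `healthy`. A sketch reconstructing it from the trace (the variable names, `sleep 5`, and the `docker inspect -f '{{.State.Health.Status}}'` call are all visible in the `set -x` output; the error message and use of plain `docker` instead of `/usr/bin/docker` are assumptions):

```shell
# Poll a container's Docker health status until it reports "healthy",
# giving up after max_attempts checks (5 seconds apart, as in the trace).
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above, ceph-ansible needed roughly a minute (thirteen polls) after the manager.service restart, while kolla-ansible and osism-ansible were already healthy on the first check.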
2026-01-10 14:21:32.535503 | orchestrator |
2026-01-10 14:21:32.535599 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 14:21:32.535615 | orchestrator |
2026-01-10 14:21:32.535628 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:21:32.535640 | orchestrator | Saturday 10 January 2026 14:21:22 +0000 (0:00:00.198) 0:00:00.198 ******
2026-01-10 14:21:32.535652 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:21:32.535663 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:21:32.535674 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:21:32.535685 | orchestrator | ok: [testbed-manager]
2026-01-10 14:21:32.535696 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:21:32.535781 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:21:32.535795 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:21:32.535806 | orchestrator |
2026-01-10 14:21:32.535818 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-10 14:21:32.535828 | orchestrator |
2026-01-10 14:21:32.535840 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-10 14:21:32.535851 | orchestrator | Saturday 10 January 2026 14:21:31 +0000 (0:00:08.589) 0:00:08.788 ******
2026-01-10 14:21:32.535862 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:21:32.535901 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:21:32.535913 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:21:32.535924 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:21:32.535935 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:21:32.536013 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:21:32.536029 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:21:32.536040 | orchestrator |
2026-01-10 14:21:32.536051 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:21:32.536063 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:21:32.536077 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:21:32.536090 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:21:32.536102 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:21:32.536127 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:21:32.536141 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:21:32.536153 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:21:32.536166 | orchestrator |
2026-01-10 14:21:32.536195 | orchestrator |
2026-01-10 14:21:32.536207 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:21:32.536220 | orchestrator | Saturday 10 January 2026 14:21:32 +0000 (0:00:00.528) 0:00:09.317 ******
2026-01-10 14:21:32.536233 | orchestrator | ===============================================================================
2026-01-10 14:21:32.536244 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.59s
2026-01-10 14:21:32.536256 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2026-01-10 14:21:32.833418 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-01-10 14:21:32.848332 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-01-10 14:21:32.869138 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-01-10 14:21:32.879995 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-01-10 14:21:32.891203 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-01-10 14:21:32.910489 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-01-10 14:21:32.923400 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-01-10 14:21:32.941356 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-01-10 14:21:32.955802 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-01-10 14:21:32.972580 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-01-10 14:21:32.988140 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-01-10 14:21:33.002969 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-01-10 14:21:33.020239 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-01-10 14:21:33.038913 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-01-10 14:21:33.058272 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-01-10 14:21:33.077300 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-01-10 14:21:33.096512 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-01-10 14:21:33.111596 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-01-10 14:21:33.129854 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-01-10 14:21:33.151272 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-01-10 14:21:33.165712 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-10 14:21:33.426099 | orchestrator | ok: Runtime: 0:24:00.150331
2026-01-10 14:21:33.546740 |
2026-01-10 14:21:33.547035 | TASK [Deploy services]
2026-01-10 14:21:34.110392 | orchestrator | skipping: Conditional result was False
2026-01-10 14:21:34.120486 |
2026-01-10 14:21:34.120644 | TASK [Deploy in a nutshell]
2026-01-10 14:21:34.843232 | orchestrator | + set -e
2026-01-10 14:21:34.844831 | orchestrator |
2026-01-10 14:21:34.844859 | orchestrator | # PULL IMAGES
2026-01-10 14:21:34.844867 | orchestrator |
2026-01-10 14:21:34.844879 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-10 14:21:34.844891 | orchestrator | ++ export INTERACTIVE=false
2026-01-10 14:21:34.844900 | orchestrator | ++ INTERACTIVE=false
2026-01-10 14:21:34.844929 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-10 14:21:34.844960 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-10 14:21:34.844970 | orchestrator | + source /opt/manager-vars.sh
2026-01-10 14:21:34.844976 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-10 14:21:34.844987 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-10 14:21:34.844993 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-10 14:21:34.845003 | orchestrator | ++ CEPH_VERSION=reef
2026-01-10 14:21:34.845008 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-10 14:21:34.845018 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-10 14:21:34.845024 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-10 14:21:34.845032 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-10 14:21:34.845039 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-10 14:21:34.845045 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-10 14:21:34.845051 | orchestrator | ++ export ARA=false
2026-01-10 14:21:34.845057 | orchestrator | ++ ARA=false
2026-01-10 14:21:34.845062 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-10 14:21:34.845068 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-10 14:21:34.845073 | orchestrator | ++ export TEMPEST=false
2026-01-10 14:21:34.845079 | orchestrator | ++ TEMPEST=false
2026-01-10 14:21:34.845084 | orchestrator | ++ export IS_ZUUL=true
2026-01-10 14:21:34.845090 | orchestrator | ++ IS_ZUUL=true
2026-01-10 14:21:34.845095 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62
2026-01-10 14:21:34.845101 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62
2026-01-10 14:21:34.845106 | orchestrator | ++ export EXTERNAL_API=false
2026-01-10 14:21:34.845112 | orchestrator | ++ EXTERNAL_API=false
2026-01-10 14:21:34.845119 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-10 14:21:34.845129 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-10 14:21:34.845135 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-10 14:21:34.845141 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-10 14:21:34.845146 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-10 14:21:34.845157 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-10 14:21:34.845163 | orchestrator | + echo
2026-01-10 14:21:34.845169 | orchestrator | + echo '# PULL IMAGES'
2026-01-10 14:21:34.845174 | orchestrator | + echo
2026-01-10 14:21:34.845184 | orchestrator | ++ semver latest 7.0.0
2026-01-10 14:21:34.906445 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-10 14:21:34.906523 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-10 14:21:34.906534 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-10 14:21:36.870248 | orchestrator | 2026-01-10 14:21:36 | INFO  | Trying to run play pull-images in environment custom
2026-01-10 14:21:47.141851 | orchestrator | 2026-01-10 14:21:47 | INFO  | Task 815f64e7-f840-49f7-8c80-167180e984e6 (pull-images) was prepared for execution.
2026-01-10 14:21:47.142074 | orchestrator | 2026-01-10 14:21:47 | INFO  | Task 815f64e7-f840-49f7-8c80-167180e984e6 is running in background. No more output. Check ARA for logs.
2026-01-10 14:21:49.824563 | orchestrator | 2026-01-10 14:21:49 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-10 14:21:59.962747 | orchestrator | 2026-01-10 14:21:59 | INFO  | Task 07a04da0-13a0-48e5-a862-5168a4463eaa (wipe-partitions) was prepared for execution.
2026-01-10 14:21:59.962868 | orchestrator | 2026-01-10 14:21:59 | INFO  | It takes a moment until task 07a04da0-13a0-48e5-a862-5168a4463eaa (wipe-partitions) has been started and output is visible here.
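The trace above shows a version gate before each `osism apply`: `semver latest 7.0.0` prints `-1` (so the `-ge 0` test fails), and the script falls back to accepting the literal tag `latest`. A hypothetical helper reconstructing that logic, assuming `semver a b` is a compare CLI printing -1/0/1 (the function name `manager_at_least` is illustrative, not from the testbed scripts):

```shell
# Gate on the manager version: true if $version >= $minimum, or if the
# "latest" tag is used (which a semver compare sorts below any release).
manager_at_least() {
    local version="$1" minimum="$2"
    if [[ "$(semver "$version" "$minimum")" -ge 0 ]]; then
        return 0
    fi
    [[ "$version" == "latest" ]]
}
```

The extra `latest` branch matters because a plain numeric compare would wrongly treat a `latest`-tagged manager as too old for the new `osism apply` syntax.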
2026-01-10 14:22:13.204959 | orchestrator |
2026-01-10 14:22:13.205081 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-10 14:22:13.205099 | orchestrator |
2026-01-10 14:22:13.205111 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-10 14:22:13.205143 | orchestrator | Saturday 10 January 2026 14:22:04 +0000 (0:00:00.127) 0:00:00.127 ******
2026-01-10 14:22:13.205157 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:22:13.205169 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:13.205181 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:13.205192 | orchestrator |
2026-01-10 14:22:13.205204 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-10 14:22:13.205248 | orchestrator | Saturday 10 January 2026 14:22:05 +0000 (0:00:00.602) 0:00:00.729 ******
2026-01-10 14:22:13.205269 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:13.205288 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:22:13.205312 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:22:13.205332 | orchestrator |
2026-01-10 14:22:13.205343 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-10 14:22:13.205354 | orchestrator | Saturday 10 January 2026 14:22:05 +0000 (0:00:00.411) 0:00:01.140 ******
2026-01-10 14:22:13.205365 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:22:13.205376 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:22:13.205387 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:22:13.205399 | orchestrator |
2026-01-10 14:22:13.205412 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-10 14:22:13.205425 | orchestrator | Saturday 10 January 2026 14:22:06 +0000 (0:00:00.596) 0:00:01.737 ******
2026-01-10 14:22:13.205438 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:13.205451 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:22:13.205463 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:22:13.205475 | orchestrator |
2026-01-10 14:22:13.205488 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-10 14:22:13.205500 | orchestrator | Saturday 10 January 2026 14:22:06 +0000 (0:00:00.257) 0:00:01.995 ******
2026-01-10 14:22:13.205512 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-10 14:22:13.205529 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-10 14:22:13.205542 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-10 14:22:13.205554 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-10 14:22:13.205566 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-10 14:22:13.205578 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-10 14:22:13.205591 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-10 14:22:13.205604 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-10 14:22:13.205616 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-10 14:22:13.205628 | orchestrator |
2026-01-10 14:22:13.205640 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-10 14:22:13.205653 | orchestrator | Saturday 10 January 2026 14:22:07 +0000 (0:00:01.242) 0:00:03.237 ******
2026-01-10 14:22:13.205665 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-10 14:22:13.205677 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-10 14:22:13.205690 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-10 14:22:13.205702 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-10 14:22:13.205714 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-10 14:22:13.205726 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-10 14:22:13.205738 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-10 14:22:13.205751 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-10 14:22:13.205763 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-10 14:22:13.205774 | orchestrator |
2026-01-10 14:22:13.205784 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-10 14:22:13.205795 | orchestrator | Saturday 10 January 2026 14:22:09 +0000 (0:00:01.641) 0:00:04.878 ******
2026-01-10 14:22:13.205805 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-10 14:22:13.205816 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-10 14:22:13.205826 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-10 14:22:13.205837 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-10 14:22:13.205848 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-10 14:22:13.205865 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-10 14:22:13.205876 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-10 14:22:13.205898 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-10 14:22:13.205946 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-10 14:22:13.205960 | orchestrator |
2026-01-10 14:22:13.205971 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-10 14:22:13.205982 | orchestrator | Saturday 10 January 2026 14:22:11 +0000 (0:00:02.104) 0:00:06.983 ******
2026-01-10 14:22:13.205993 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:13.206003 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:22:13.206082 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:13.206097 | orchestrator |
2026-01-10 14:22:13.206108 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-10 14:22:13.206119 | orchestrator | Saturday 10 January 2026 14:22:12 +0000 (0:00:00.602) 0:00:07.585 ******
2026-01-10 14:22:13.206130 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:13.206141 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:22:13.206152 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:13.206162 | orchestrator |
2026-01-10 14:22:13.206173 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:22:13.206186 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:13.206198 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:13.206230 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:13.206241 | orchestrator |
2026-01-10 14:22:13.206252 | orchestrator |
2026-01-10 14:22:13.206263 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:22:13.206274 | orchestrator | Saturday 10 January 2026 14:22:12 +0000 (0:00:00.613) 0:00:08.199 ******
2026-01-10 14:22:13.206285 | orchestrator | ===============================================================================
2026-01-10 14:22:13.206295 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.10s
2026-01-10 14:22:13.206306 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.64s
2026-01-10 14:22:13.206316 | orchestrator | Check device availability ----------------------------------------------- 1.24s
2026-01-10 14:22:13.206327 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s
2026-01-10 14:22:13.206337 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s
2026-01-10 14:22:13.206348 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s
2026-01-10 14:22:13.206358 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s
2026-01-10 14:22:13.206369 | orchestrator | Remove all rook related logical devices --------------------------------- 0.41s
2026-01-10 14:22:13.206380 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2026-01-10 14:22:25.661625 | orchestrator | 2026-01-10 14:22:25 | INFO  | Task 8e8e77b2-3436-44dd-93ed-d0c7e0d2ad8e (facts) was prepared for execution.
2026-01-10 14:22:25.661742 | orchestrator | 2026-01-10 14:22:25 | INFO  | It takes a moment until task 8e8e77b2-3436-44dd-93ed-d0c7e0d2ad8e (facts) has been started and output is visible here.
2026-01-10 14:22:39.709359 | orchestrator |
2026-01-10 14:22:39.709452 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-10 14:22:39.709462 | orchestrator |
2026-01-10 14:22:39.709469 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-10 14:22:39.709476 | orchestrator | Saturday 10 January 2026 14:22:30 +0000 (0:00:00.282) 0:00:00.282 ******
2026-01-10 14:22:39.709482 | orchestrator | ok: [testbed-manager]
2026-01-10 14:22:39.709490 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:22:39.709496 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:22:39.709524 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:22:39.709531 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:22:39.709537 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:22:39.709543 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:22:39.709549 | orchestrator |
2026-01-10 14:22:39.709558 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-10 14:22:39.709564 | orchestrator | Saturday 10 January 2026 14:22:31 +0000 (0:00:01.189) 0:00:01.472 ******
2026-01-10 14:22:39.709571 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:22:39.709577 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:22:39.709584 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:22:39.709590 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:22:39.709596 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:39.709602 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:22:39.709608 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:22:39.709614 | orchestrator |
2026-01-10 14:22:39.709621 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 14:22:39.709627 | orchestrator |
2026-01-10 14:22:39.709633 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:22:39.709639 | orchestrator | Saturday 10 January 2026 14:22:32 +0000 (0:00:01.435) 0:00:02.907 ******
2026-01-10 14:22:39.709645 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:22:39.709651 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:22:39.709658 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:22:39.709664 | orchestrator | ok: [testbed-manager]
2026-01-10 14:22:39.709670 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:22:39.709677 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:22:39.709683 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:22:39.709689 | orchestrator |
2026-01-10 14:22:39.709695 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-10 14:22:39.709701 | orchestrator |
2026-01-10 14:22:39.709707 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-10 14:22:39.709725 | orchestrator | Saturday 10 January 2026 14:22:38 +0000 (0:00:05.950) 0:00:08.858 ******
2026-01-10 14:22:39.709732 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:22:39.709738 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:22:39.709744 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:22:39.709750 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:22:39.709756 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:39.709762 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:22:39.709768 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:22:39.709774 | orchestrator |
2026-01-10 14:22:39.709781 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:22:39.709787 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:39.709795 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:39.709801 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:39.709807 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:39.709814 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:39.709820 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:39.709826 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:22:39.709832 | orchestrator |
2026-01-10 14:22:39.709847 | orchestrator |
2026-01-10 14:22:39.709857 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:22:39.709867 | orchestrator | Saturday 10 January 2026 14:22:39 +0000 (0:00:00.558) 0:00:09.416 ******
2026-01-10 14:22:39.709876 | orchestrator | ===============================================================================
2026-01-10 14:22:39.709886 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.95s
2026-01-10 14:22:39.709921 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.44s
2026-01-10 14:22:39.709933 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.19s
2026-01-10 14:22:39.709940 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2026-01-10 14:22:42.143740 | orchestrator | 2026-01-10 14:22:42 | INFO  | Task 7044ebc3-2278-4e4c-ba33-eeb7e191d000 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-10 14:22:42.143829 | orchestrator | 2026-01-10 14:22:42 | INFO  | It takes a moment until task 7044ebc3-2278-4e4c-ba33-eeb7e191d000 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-10 14:22:53.950916 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-10 14:22:53.951041 | orchestrator | 2.16.14
2026-01-10 14:22:53.951060 | orchestrator |
2026-01-10 14:22:53.951074 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-10 14:22:53.951087 | orchestrator |
2026-01-10 14:22:53.951128 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 14:22:53.951141 | orchestrator | Saturday 10 January 2026 14:22:46 +0000 (0:00:00.331) 0:00:00.331 ******
2026-01-10 14:22:53.951152 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 14:22:53.951164 | orchestrator |
2026-01-10 14:22:53.951175 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-10 14:22:53.951186 | orchestrator | Saturday 10 January 2026 14:22:46 +0000 (0:00:00.262) 0:00:00.594 ******
2026-01-10 14:22:53.951197 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:22:53.951208 | orchestrator |
2026-01-10 14:22:53.951219 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951230 | orchestrator | Saturday 10 January 2026 14:22:47 +0000 (0:00:00.207) 0:00:00.801 ******
2026-01-10 14:22:53.951241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-10 14:22:53.951252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-10 14:22:53.951263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-10 14:22:53.951274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-10 14:22:53.951284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-10 14:22:53.951295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-10 14:22:53.951306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-10 14:22:53.951316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-10 14:22:53.951327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-10 14:22:53.951338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-10 14:22:53.951358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-10 14:22:53.951369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-10 14:22:53.951380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-10 14:22:53.951391 | orchestrator |
2026-01-10 14:22:53.951401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951435 | orchestrator | Saturday 10 January 2026 14:22:47 +0000 (0:00:00.488) 0:00:01.290 ******
2026-01-10 14:22:53.951447 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.951458 | orchestrator |
2026-01-10 14:22:53.951468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951479 | orchestrator | Saturday 10 January 2026 14:22:47 +0000 (0:00:00.200) 0:00:01.491 ******
2026-01-10 14:22:53.951490 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.951501 | orchestrator |
2026-01-10 14:22:53.951511 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951522 | orchestrator | Saturday 10 January 2026 14:22:47 +0000 (0:00:00.204) 0:00:01.696 ******
2026-01-10 14:22:53.951532 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.951543 | orchestrator |
2026-01-10 14:22:53.951554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951570 | orchestrator | Saturday 10 January 2026 14:22:48 +0000 (0:00:00.193) 0:00:01.889 ******
2026-01-10 14:22:53.951581 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.951592 | orchestrator |
2026-01-10 14:22:53.951603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951613 | orchestrator | Saturday 10 January 2026 14:22:48 +0000 (0:00:00.191) 0:00:02.080 ******
2026-01-10 14:22:53.951624 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.951635 | orchestrator |
2026-01-10 14:22:53.951646 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951657 | orchestrator | Saturday 10 January 2026 14:22:48 +0000 (0:00:00.206) 0:00:02.287 ******
2026-01-10 14:22:53.951667 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.951678 | orchestrator |
2026-01-10 14:22:53.951688 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951699 | orchestrator | Saturday 10 January 2026 14:22:48 +0000 (0:00:00.218) 0:00:02.505 ******
2026-01-10 14:22:53.951709 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.951720 | orchestrator |
2026-01-10 14:22:53.951731 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951742 | orchestrator | Saturday 10 January 2026 14:22:48 +0000 (0:00:00.209) 0:00:02.715 ******
2026-01-10 14:22:53.951752 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.951763 | orchestrator |
2026-01-10 14:22:53.951774 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951784 | orchestrator | Saturday 10 January 2026 14:22:49 +0000 (0:00:00.196) 0:00:02.911 ******
2026-01-10 14:22:53.951795 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196)
2026-01-10 14:22:53.951807 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196)
2026-01-10 14:22:53.951817 | orchestrator |
2026-01-10 14:22:53.951828 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951860 | orchestrator | Saturday 10 January 2026 14:22:49 +0000 (0:00:00.440) 0:00:03.352 ******
2026-01-10 14:22:53.951871 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fb1cd23c-1eba-48f8-b0af-e37f12bddfbe)
2026-01-10 14:22:53.951913 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fb1cd23c-1eba-48f8-b0af-e37f12bddfbe)
2026-01-10 14:22:53.951925 | orchestrator |
2026-01-10 14:22:53.951936 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.951947 | orchestrator | Saturday 10 January 2026 14:22:50 +0000 (0:00:00.613) 0:00:03.965 ******
2026-01-10 14:22:53.951958 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2ce7cca4-0817-4dba-a1e7-697e67028341)
2026-01-10 14:22:53.951968 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2ce7cca4-0817-4dba-a1e7-697e67028341)
2026-01-10 14:22:53.951979 | orchestrator |
2026-01-10 14:22:53.951989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.952009 | orchestrator | Saturday 10 January 2026 14:22:50 +0000 (0:00:00.685) 0:00:04.651 ******
2026-01-10 14:22:53.952020 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_644eb2b6-5717-40d5-adcd-cd376a39a92a)
2026-01-10 14:22:53.952031 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_644eb2b6-5717-40d5-adcd-cd376a39a92a)
2026-01-10 14:22:53.952041 | orchestrator |
2026-01-10 14:22:53.952052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:22:53.952062 | orchestrator | Saturday 10 January 2026 14:22:51 +0000 (0:00:00.948) 0:00:05.599 ******
2026-01-10 14:22:53.952073 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-10 14:22:53.952084 | orchestrator |
2026-01-10 14:22:53.952101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:22:53.952112 | orchestrator | Saturday 10 January 2026 14:22:52 +0000 (0:00:00.332) 0:00:05.931 ******
2026-01-10 14:22:53.952123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-10 14:22:53.952133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-10 14:22:53.952144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-10 14:22:53.952155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-10 14:22:53.952165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-10 14:22:53.952176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-10 14:22:53.952186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-10 14:22:53.952197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-10 14:22:53.952207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-10 14:22:53.952218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-10 14:22:53.952228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-10 14:22:53.952239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-10 14:22:53.952249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-10 14:22:53.952260 | orchestrator |
2026-01-10 14:22:53.952271 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:22:53.952281 | orchestrator | Saturday 10 January 2026 14:22:52 +0000 (0:00:00.379) 0:00:06.311 ******
2026-01-10 14:22:53.952292 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.952303 | orchestrator |
2026-01-10 14:22:53.952314 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:22:53.952324 | orchestrator | Saturday 10 January 2026 14:22:52 +0000 (0:00:00.199) 0:00:06.510 ******
2026-01-10 14:22:53.952335 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.952346 | orchestrator |
2026-01-10 14:22:53.952356 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:22:53.952367 | orchestrator | Saturday 10 January 2026 14:22:52 +0000 (0:00:00.198) 0:00:06.708 ******
2026-01-10 14:22:53.952377 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.952388 | orchestrator |
2026-01-10 14:22:53.952398 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:22:53.952409 | orchestrator | Saturday 10 January 2026 14:22:53 +0000 (0:00:00.205) 0:00:06.914 ******
2026-01-10 14:22:53.952420 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.952431 | orchestrator |
2026-01-10 14:22:53.952441 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:22:53.952452 | orchestrator | Saturday 10 January 2026 14:22:53 +0000 (0:00:00.200) 0:00:07.114 ******
2026-01-10 14:22:53.952469 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.952480 | orchestrator |
2026-01-10 14:22:53.952490 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:22:53.952501 | orchestrator | Saturday 10 January 2026 14:22:53 +0000 (0:00:00.198) 0:00:07.312 ******
2026-01-10 14:22:53.952511 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.952522 | orchestrator |
2026-01-10 14:22:53.952532 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:22:53.952543 | orchestrator | Saturday 10 January 2026 14:22:53 +0000 (0:00:00.199) 0:00:07.512 ******
2026-01-10 14:22:53.952554 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:53.952564 | orchestrator |
2026-01-10 14:22:53.952581 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:01.570169 | orchestrator | Saturday 10 January 2026 14:22:53 +0000 (0:00:00.195) 0:00:07.707 ******
2026-01-10 14:23:01.570282 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.570300 | orchestrator |
2026-01-10 14:23:01.570313 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:01.570325 | orchestrator | Saturday 10 January 2026 14:22:54 +0000 (0:00:00.206) 0:00:07.914 ******
2026-01-10 14:23:01.570336 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-10 14:23:01.570348 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-10 14:23:01.570359 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-10 14:23:01.570370 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-10 14:23:01.570381 | orchestrator |
2026-01-10 14:23:01.570392 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:01.570404 | orchestrator | Saturday 10 January 2026 14:22:55 +0000 (0:00:01.050) 0:00:08.964 ******
2026-01-10 14:23:01.570414 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.570425 | orchestrator |
2026-01-10 14:23:01.570436 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:01.570447 | orchestrator | Saturday 10 January 2026 14:22:55 +0000 (0:00:00.199) 0:00:09.164 ******
2026-01-10 14:23:01.570458 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.570469 | orchestrator |
2026-01-10 14:23:01.570480 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:01.570491 | orchestrator | Saturday 10 January 2026 14:22:55 +0000 (0:00:00.202) 0:00:09.366 ******
2026-01-10 14:23:01.570502 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.570512 | orchestrator |
2026-01-10 14:23:01.570523 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:01.570534 | orchestrator | Saturday 10 January 2026 14:22:55 +0000 (0:00:00.200) 0:00:09.567 ******
2026-01-10 14:23:01.570545 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.570556 | orchestrator |
2026-01-10 14:23:01.570566 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-10 14:23:01.570577 | orchestrator | Saturday 10 January 2026 14:22:55 +0000 (0:00:00.195) 0:00:09.762 ******
2026-01-10 14:23:01.570588 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-10 14:23:01.570599 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-10 14:23:01.570609 | orchestrator |
2026-01-10 14:23:01.570639 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-10 14:23:01.570653 | orchestrator | Saturday 10 January 2026 14:22:56 +0000 (0:00:00.171) 0:00:09.933 ******
2026-01-10 14:23:01.570665 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.570678 | orchestrator |
2026-01-10 14:23:01.570690 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-10 14:23:01.570703 | orchestrator | Saturday 10 January 2026 14:22:56 +0000 (0:00:00.137) 0:00:10.071 ******
2026-01-10 14:23:01.570715 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.570727 | orchestrator |
2026-01-10 14:23:01.570740 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-10 14:23:01.570779 | orchestrator | Saturday 10 January 2026 14:22:56 +0000 (0:00:00.132) 0:00:10.204 ******
2026-01-10 14:23:01.570792 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.570805 | orchestrator |
2026-01-10 14:23:01.570818 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-10 14:23:01.570830 | orchestrator | Saturday 10 January 2026 14:22:56 +0000 (0:00:00.142) 0:00:10.347 ******
2026-01-10 14:23:01.570843 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:23:01.570856 | orchestrator |
2026-01-10 14:23:01.570868 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-10 14:23:01.570910 | orchestrator | Saturday 10 January 2026 14:22:56 +0000 (0:00:00.138) 0:00:10.485 ******
2026-01-10 14:23:01.570924 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'}})
2026-01-10 14:23:01.570937 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aeb55798-e032-5872-951c-62472db4891e'}})
2026-01-10 14:23:01.570950 | orchestrator |
2026-01-10 14:23:01.570963 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-10 14:23:01.570974 | orchestrator | Saturday 10 January 2026 14:22:56 +0000 (0:00:00.170) 0:00:10.655 ******
2026-01-10 14:23:01.570986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'}})
2026-01-10 14:23:01.571004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aeb55798-e032-5872-951c-62472db4891e'}})
2026-01-10 14:23:01.571015 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.571026 | orchestrator |
2026-01-10 14:23:01.571036 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-10 14:23:01.571047 | orchestrator | Saturday 10 January 2026 14:22:57 +0000 (0:00:00.154) 0:00:10.810 ******
2026-01-10 14:23:01.571058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'}})
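The lvm_volumes entries generated above follow a simple naming convention visible in the printed configuration data: each OSD's LVM UUID is prefixed to form the LV name (`osd-block-<uuid>`) and the VG name (`ceph-<uuid>`). A minimal sketch of that mapping, using the UUID for sdb from the log (the variable names are illustrative, not from the playbook):

```shell
#!/bin/sh
# Derive the LV/VG names for one OSD device from its osd_lvm_uuid,
# matching the "data"/"data_vg" fields printed by ceph-configure-lvm-volumes.
uuid="2f4cdd2b-88b0-5432-8a57-fbfff03caf8e"   # osd_lvm_uuid of sdb (from the log)
data="osd-block-${uuid}"                      # logical volume name
data_vg="ceph-${uuid}"                        # volume group name
echo "${data_vg}/${data}"
```

Because the names are derived deterministically from the stored UUID, re-running the play reproduces the same VG/LV layout instead of generating new names.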
2026-01-10 14:23:01.571069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aeb55798-e032-5872-951c-62472db4891e'}})
2026-01-10 14:23:01.571080 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.571091 | orchestrator |
2026-01-10 14:23:01.571101 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-10 14:23:01.571112 | orchestrator | Saturday 10 January 2026 14:22:57 +0000 (0:00:00.356) 0:00:11.167 ******
2026-01-10 14:23:01.571123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'}})
2026-01-10 14:23:01.571152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aeb55798-e032-5872-951c-62472db4891e'}})
2026-01-10 14:23:01.571164 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.571175 | orchestrator |
2026-01-10 14:23:01.571186 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-10 14:23:01.571203 | orchestrator | Saturday 10 January 2026 14:22:57 +0000 (0:00:00.152) 0:00:11.319 ******
2026-01-10 14:23:01.571214 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:23:01.571225 | orchestrator |
2026-01-10 14:23:01.571235 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-10 14:23:01.571246 | orchestrator | Saturday 10 January 2026 14:22:57 +0000 (0:00:00.142) 0:00:11.461 ******
2026-01-10 14:23:01.571257 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:23:01.571268 | orchestrator |
2026-01-10 14:23:01.571278 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-10 14:23:01.571289 | orchestrator | Saturday 10 January 2026 14:22:57 +0000 (0:00:00.136) 0:00:11.598 ******
2026-01-10 14:23:01.571300 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.571311 | orchestrator |
2026-01-10 14:23:01.571321 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-10 14:23:01.571332 | orchestrator | Saturday 10 January 2026 14:22:57 +0000 (0:00:00.138) 0:00:11.736 ******
2026-01-10 14:23:01.571351 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.571362 | orchestrator |
2026-01-10 14:23:01.571373 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-10 14:23:01.571384 | orchestrator | Saturday 10 January 2026 14:22:58 +0000 (0:00:00.145) 0:00:11.882 ******
2026-01-10 14:23:01.571395 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.571405 | orchestrator |
2026-01-10 14:23:01.571416 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-10 14:23:01.571427 | orchestrator | Saturday 10 January 2026 14:22:58 +0000 (0:00:00.137) 0:00:12.020 ******
2026-01-10 14:23:01.571437 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:23:01.571448 | orchestrator |     "ceph_osd_devices": {
2026-01-10 14:23:01.571459 | orchestrator |         "sdb": {
2026-01-10 14:23:01.571471 | orchestrator |             "osd_lvm_uuid": "2f4cdd2b-88b0-5432-8a57-fbfff03caf8e"
2026-01-10 14:23:01.571482 | orchestrator |         },
2026-01-10 14:23:01.571492 | orchestrator |         "sdc": {
2026-01-10 14:23:01.571503 | orchestrator |             "osd_lvm_uuid": "aeb55798-e032-5872-951c-62472db4891e"
2026-01-10 14:23:01.571514 | orchestrator |         }
2026-01-10 14:23:01.571524 | orchestrator |     }
2026-01-10 14:23:01.571535 | orchestrator | }
2026-01-10 14:23:01.571546 | orchestrator |
2026-01-10 14:23:01.571557 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-10 14:23:01.571568 | orchestrator | Saturday 10 January 2026 14:22:58 +0000 (0:00:00.141) 0:00:12.162 ******
2026-01-10 14:23:01.571578 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.571589 | orchestrator |
2026-01-10 14:23:01.571599 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-10 14:23:01.571610 | orchestrator | Saturday 10 January 2026 14:22:58 +0000 (0:00:00.127) 0:00:12.290 ******
2026-01-10 14:23:01.571621 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.571631 | orchestrator |
2026-01-10 14:23:01.571642 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-10 14:23:01.571653 | orchestrator | Saturday 10 January 2026 14:22:58 +0000 (0:00:00.136) 0:00:12.426 ******
2026-01-10 14:23:01.571663 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:23:01.571674 | orchestrator |
2026-01-10 14:23:01.571685 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-10 14:23:01.571695 | orchestrator | Saturday 10 January 2026 14:22:58 +0000 (0:00:00.120) 0:00:12.547 ******
2026-01-10 14:23:01.571706 | orchestrator | changed: [testbed-node-3] => {
2026-01-10 14:23:01.571717 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-10 14:23:01.571731 | orchestrator |         "ceph_osd_devices": {
2026-01-10 14:23:01.571749 | orchestrator |             "sdb": {
2026-01-10 14:23:01.571766 | orchestrator |                 "osd_lvm_uuid": "2f4cdd2b-88b0-5432-8a57-fbfff03caf8e"
2026-01-10 14:23:01.571784 | orchestrator |             },
2026-01-10 14:23:01.571802 | orchestrator |             "sdc": {
2026-01-10 14:23:01.571820 | orchestrator |                 "osd_lvm_uuid": "aeb55798-e032-5872-951c-62472db4891e"
2026-01-10 14:23:01.571836 | orchestrator |             }
2026-01-10 14:23:01.571853 | orchestrator |         },
2026-01-10 14:23:01.571867 | orchestrator |         "lvm_volumes": [
2026-01-10 14:23:01.571908 | orchestrator |             {
2026-01-10 14:23:01.571927 | orchestrator |                 "data": "osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e",
2026-01-10 14:23:01.571942 | orchestrator |                 "data_vg": "ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e"
2026-01-10 14:23:01.571953 | orchestrator |             },
2026-01-10 14:23:01.571964 | orchestrator |             {
2026-01-10 14:23:01.571975 | orchestrator |                 "data": "osd-block-aeb55798-e032-5872-951c-62472db4891e",
2026-01-10 14:23:01.571985 | orchestrator |                 "data_vg": "ceph-aeb55798-e032-5872-951c-62472db4891e"
2026-01-10 14:23:01.572003 | orchestrator |             }
2026-01-10 14:23:01.572017 | orchestrator |         ]
2026-01-10 14:23:01.572037 | orchestrator |     }
2026-01-10 14:23:01.572066 | orchestrator | }
2026-01-10 14:23:01.572083 | orchestrator |
2026-01-10 14:23:01.572101 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-10 14:23:01.572117 | orchestrator | Saturday 10 January 2026 14:22:59 +0000 (0:00:00.423) 0:00:12.970 ******
2026-01-10 14:23:01.572135 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 14:23:01.572152 | orchestrator |
2026-01-10 14:23:01.572169 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-10 14:23:01.572187 | orchestrator |
2026-01-10 14:23:01.572202 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 14:23:01.572213 | orchestrator | Saturday 10 January 2026 14:23:01 +0000 (0:00:01.852) 0:00:14.823 ******
2026-01-10 14:23:01.572223 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-10 14:23:01.572234 | orchestrator |
2026-01-10 14:23:01.572244 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-10 14:23:01.572255 | orchestrator | Saturday 10 January 2026 14:23:01 +0000 (0:00:00.251) 0:00:15.075 ******
2026-01-10 14:23:01.572266 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:23:01.572276 | orchestrator |
2026-01-10 14:23:01.572297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056030 | orchestrator | Saturday 10 January 2026 14:23:01 +0000 (0:00:00.256) 0:00:15.332 ******
2026-01-10 14:23:09.056161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-10 14:23:09.056191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-10 14:23:09.056206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-10 14:23:09.056218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-10 14:23:09.056229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-10 14:23:09.056240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-10 14:23:09.056251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-10 14:23:09.056262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-10 14:23:09.056273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-10 14:23:09.056284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-10 14:23:09.056295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-10 14:23:09.056311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-10 14:23:09.056323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-10 14:23:09.056335 | orchestrator |
2026-01-10 14:23:09.056347 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056358 | orchestrator | Saturday 10 January 2026 14:23:02 +0000 (0:00:00.485) 0:00:15.817 ******
2026-01-10 14:23:09.056370 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.056382 | orchestrator |
2026-01-10 14:23:09.056392 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056403 | orchestrator | Saturday 10 January 2026 14:23:02 +0000 (0:00:00.207) 0:00:16.025 ******
2026-01-10 14:23:09.056414 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.056425 | orchestrator |
2026-01-10 14:23:09.056436 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056447 | orchestrator | Saturday 10 January 2026 14:23:02 +0000 (0:00:00.212) 0:00:16.238 ******
2026-01-10 14:23:09.056459 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.056470 | orchestrator |
2026-01-10 14:23:09.056481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056520 | orchestrator | Saturday 10 January 2026 14:23:02 +0000 (0:00:00.199) 0:00:16.437 ******
2026-01-10 14:23:09.056535 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.056547 | orchestrator |
2026-01-10 14:23:09.056560 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056573 | orchestrator | Saturday 10 January 2026 14:23:02 +0000 (0:00:00.272) 0:00:16.710 ******
2026-01-10 14:23:09.056586 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.056598 | orchestrator |
2026-01-10 14:23:09.056610 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056621 | orchestrator | Saturday 10 January 2026 14:23:03 +0000 (0:00:00.670) 0:00:17.380 ******
2026-01-10 14:23:09.056632 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.056643 | orchestrator |
2026-01-10 14:23:09.056672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056684 | orchestrator | Saturday 10 January 2026 14:23:03 +0000 (0:00:00.206) 0:00:17.586 ******
2026-01-10 14:23:09.056695 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.056705 | orchestrator |
2026-01-10 14:23:09.056716 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056727 | orchestrator | Saturday 10 January 2026 14:23:04 +0000 (0:00:00.224) 0:00:17.810 ******
2026-01-10 14:23:09.056738 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.056749 | orchestrator |
2026-01-10 14:23:09.056760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056771 | orchestrator | Saturday 10 January 2026 14:23:04 +0000 (0:00:00.210) 0:00:18.020 ******
2026-01-10 14:23:09.056781 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e)
2026-01-10 14:23:09.056794 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e)
2026-01-10 14:23:09.056805 | orchestrator |
2026-01-10 14:23:09.056816 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056826 | orchestrator | Saturday 10 January 2026 14:23:04 +0000 (0:00:00.432) 0:00:18.453 ******
2026-01-10 14:23:09.056837 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4c46785e-60ba-460b-8af0-69ed9944293e)
2026-01-10 14:23:09.056848 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4c46785e-60ba-460b-8af0-69ed9944293e)
2026-01-10 14:23:09.056859 | orchestrator |
2026-01-10 14:23:09.056894 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056906 | orchestrator | Saturday 10 January 2026 14:23:05 +0000 (0:00:00.431) 0:00:18.884 ******
2026-01-10 14:23:09.056916 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f60c9e3f-4fb9-4762-8319-6decaa6c25a2)
2026-01-10 14:23:09.056927 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f60c9e3f-4fb9-4762-8319-6decaa6c25a2)
2026-01-10 14:23:09.056938 | orchestrator |
2026-01-10 14:23:09.056949 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.056980 | orchestrator | Saturday 10 January 2026 14:23:05 +0000 (0:00:00.432) 0:00:19.317 ******
2026-01-10 14:23:09.056992 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_56640cac-7dbd-450f-ace0-5456f0f7a79c)
2026-01-10 14:23:09.057003 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_56640cac-7dbd-450f-ace0-5456f0f7a79c)
2026-01-10 14:23:09.057014 | orchestrator |
2026-01-10 14:23:09.057025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:09.057036 | orchestrator | Saturday 10 January 2026 14:23:05 +0000 (0:00:00.422) 0:00:19.740 ******
2026-01-10 14:23:09.057047 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-10 14:23:09.057058 | orchestrator |
2026-01-10 14:23:09.057068 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057079 | orchestrator | Saturday 10 January 2026 14:23:06 +0000 (0:00:00.258) 0:00:19.999 ******
2026-01-10 14:23:09.057099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-10 14:23:09.057109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-10 14:23:09.057120 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-10 14:23:09.057131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-10 14:23:09.057141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-10 14:23:09.057152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-10 14:23:09.057163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-10 14:23:09.057173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-10 14:23:09.057184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-10 14:23:09.057194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-10 14:23:09.057205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-10 14:23:09.057216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-10 14:23:09.057226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-10 14:23:09.057237 | orchestrator |
2026-01-10 14:23:09.057248 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057259 | orchestrator | Saturday 10 January 2026 14:23:06 +0000 (0:00:00.275) 0:00:20.275 ******
2026-01-10 14:23:09.057269 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.057280 | orchestrator |
2026-01-10 14:23:09.057291 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057310 | orchestrator | Saturday 10 January 2026 14:23:06 +0000 (0:00:00.438) 0:00:20.713 ******
2026-01-10 14:23:09.057321 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.057332 | orchestrator |
2026-01-10 14:23:09.057342 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057353 | orchestrator | Saturday 10 January 2026 14:23:07 +0000 (0:00:00.147) 0:00:20.860 ******
2026-01-10 14:23:09.057364 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.057375 | orchestrator |
2026-01-10 14:23:09.057386 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057396 | orchestrator | Saturday 10 January 2026 14:23:07 +0000 (0:00:00.163) 0:00:21.024 ******
2026-01-10 14:23:09.057407 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.057418 | orchestrator |
2026-01-10 14:23:09.057429 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057440 | orchestrator | Saturday 10 January 2026 14:23:07 +0000 (0:00:00.172) 0:00:21.196 ******
2026-01-10 14:23:09.057451 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.057461 | orchestrator |
2026-01-10 14:23:09.057472 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057483 | orchestrator | Saturday 10 January 2026 14:23:07 +0000 (0:00:00.163) 0:00:21.359 ******
2026-01-10 14:23:09.057494 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.057504 | orchestrator |
2026-01-10 14:23:09.057515 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057526 | orchestrator | Saturday 10 January 2026 14:23:07 +0000 (0:00:00.178) 0:00:21.538 ******
2026-01-10 14:23:09.057537 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.057548 | orchestrator |
2026-01-10 14:23:09.057558 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057569 | orchestrator | Saturday 10 January 2026 14:23:07 +0000 (0:00:00.174) 0:00:21.712 ******
2026-01-10 14:23:09.057586 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:09.057597 | orchestrator |
2026-01-10 14:23:09.057608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057619 | orchestrator | Saturday 10 January 2026 14:23:08 +0000 (0:00:00.172) 0:00:21.885 ******
2026-01-10 14:23:09.057629 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-10 14:23:09.057641 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-10 14:23:09.057652 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-10 14:23:09.057663 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-10 14:23:09.057674 | orchestrator |
2026-01-10 14:23:09.057685 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:09.057696 | orchestrator | Saturday 10 January 2026 14:23:08 +0000 (0:00:00.761) 0:00:22.646 ******
2026-01-10 14:23:09.057706 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.498775 | orchestrator |
2026-01-10 14:23:14.498852 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:14.498860 | orchestrator | Saturday 10 January 2026 14:23:09 +0000 (0:00:00.174) 0:00:22.821 ******
2026-01-10 14:23:14.498889 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.498895 | orchestrator |
2026-01-10 14:23:14.498899 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:14.498903 | orchestrator | Saturday 10 January 2026 14:23:09 +0000 (0:00:00.181) 0:00:23.003 ******
2026-01-10 14:23:14.498907 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.498911 | orchestrator |
2026-01-10 14:23:14.498915 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:23:14.498919 | orchestrator | Saturday 10 January 2026 14:23:09 +0000 (0:00:00.175) 0:00:23.179 ******
2026-01-10 14:23:14.498923 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.498927 | orchestrator |
2026-01-10 14:23:14.498931 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-10 14:23:14.498934 | orchestrator | Saturday 10 January 2026 14:23:09 +0000 (0:00:00.572) 0:00:23.752 ******
2026-01-10 14:23:14.498938 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-01-10 14:23:14.498942 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-01-10 14:23:14.498946 | orchestrator |
2026-01-10 14:23:14.498950 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-10 14:23:14.498954 | orchestrator | Saturday 10 January 2026 14:23:10 +0000 (0:00:00.160) 0:00:23.912 ******
2026-01-10 14:23:14.498957 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.498962 | orchestrator |
2026-01-10 14:23:14.498965 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-10 14:23:14.498969 | orchestrator | Saturday 10 January 2026 14:23:10 +0000 (0:00:00.132) 0:00:24.045 ******
2026-01-10 14:23:14.498973 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.498977 | orchestrator |
2026-01-10 14:23:14.498980 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-10 14:23:14.498984 | orchestrator | Saturday 10 January 2026 14:23:10 +0000 (0:00:00.118) 0:00:24.163 ******
2026-01-10 14:23:14.498988 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.498992 | orchestrator |
2026-01-10 14:23:14.498995 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-10 14:23:14.498999 | orchestrator | Saturday 10 January 2026 14:23:10 +0000 (0:00:00.114) 0:00:24.277 ******
2026-01-10 14:23:14.499003 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:23:14.499007 | orchestrator |
2026-01-10 14:23:14.499011 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-10 14:23:14.499015 | orchestrator | Saturday 10 January 2026 14:23:10 +0000 (0:00:00.121) 0:00:24.398 ******
2026-01-10 14:23:14.499020 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '381f50a6-56c2-5a32-835b-1a08246466ad'}})
2026-01-10 14:23:14.499024 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a6c1f07-f96f-5f9c-9404-64a84774a9be'}})
2026-01-10 14:23:14.499044 | orchestrator |
2026-01-10 14:23:14.499048 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-10 14:23:14.499052 | orchestrator | Saturday 10 January 2026 14:23:10 +0000 (0:00:00.149) 0:00:24.547 ******
2026-01-10 14:23:14.499056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '381f50a6-56c2-5a32-835b-1a08246466ad'}})
2026-01-10 14:23:14.499071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a6c1f07-f96f-5f9c-9404-64a84774a9be'}})
2026-01-10 14:23:14.499075 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.499079 | orchestrator |
2026-01-10 14:23:14.499083 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-10 14:23:14.499086 | orchestrator | Saturday 10 January 2026 14:23:10 +0000 (0:00:00.114) 0:00:24.662 ******
2026-01-10 14:23:14.499090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '381f50a6-56c2-5a32-835b-1a08246466ad'}})
2026-01-10 14:23:14.499094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a6c1f07-f96f-5f9c-9404-64a84774a9be'}})
2026-01-10 14:23:14.499097 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.499101 | orchestrator |
2026-01-10 14:23:14.499105 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-10 14:23:14.499108 | orchestrator | Saturday 10 January 2026 14:23:11 +0000 (0:00:00.130) 0:00:24.793 ******
2026-01-10 14:23:14.499113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '381f50a6-56c2-5a32-835b-1a08246466ad'}})
2026-01-10 14:23:14.499117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a6c1f07-f96f-5f9c-9404-64a84774a9be'}})
2026-01-10 14:23:14.499122 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.499128 | orchestrator |
2026-01-10 14:23:14.499134 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-10 14:23:14.499140 | orchestrator | Saturday 10 January 2026 14:23:11 +0000 (0:00:00.138) 0:00:24.931 ******
2026-01-10 14:23:14.499148 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:23:14.499155 | orchestrator |
2026-01-10 14:23:14.499160 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-10 14:23:14.499166 | orchestrator | Saturday 10 January 2026 14:23:11 +0000 (0:00:00.123) 0:00:25.054 ******
2026-01-10 14:23:14.499172 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:23:14.499178 | orchestrator |
2026-01-10 14:23:14.499184 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-10 14:23:14.499191 | orchestrator | Saturday 10 January 2026 14:23:11 +0000 (0:00:00.114) 0:00:25.169 ******
2026-01-10 14:23:14.499210 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.499216 | orchestrator |
2026-01-10 14:23:14.499222 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-10 14:23:14.499228 | orchestrator | Saturday 10 January 2026 14:23:11 +0000 (0:00:00.248) 0:00:25.418 ******
2026-01-10 14:23:14.499234 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.499240 | orchestrator |
2026-01-10 14:23:14.499245 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-10 14:23:14.499251 | orchestrator | Saturday 10 January 2026 14:23:11 +0000 (0:00:00.115) 0:00:25.534 ******
2026-01-10 14:23:14.499257 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.499263 | orchestrator |
2026-01-10 14:23:14.499269 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-10 14:23:14.499275 | orchestrator | Saturday 10 January 2026 14:23:11 +0000 (0:00:00.094) 0:00:25.629 ******
2026-01-10 14:23:14.499282 | orchestrator | ok: [testbed-node-4] => {
2026-01-10 14:23:14.499287 | orchestrator |     "ceph_osd_devices": {
2026-01-10 14:23:14.499294 | orchestrator |         "sdb": {
2026-01-10 14:23:14.499300 | orchestrator |             "osd_lvm_uuid": "381f50a6-56c2-5a32-835b-1a08246466ad"
2026-01-10 14:23:14.499313 | orchestrator |         },
2026-01-10 14:23:14.499320 | orchestrator |         "sdc": {
2026-01-10 14:23:14.499326 | orchestrator |             "osd_lvm_uuid": "5a6c1f07-f96f-5f9c-9404-64a84774a9be"
2026-01-10 14:23:14.499332 | orchestrator |         }
2026-01-10 14:23:14.499338 | orchestrator |     }
2026-01-10 14:23:14.499345 | orchestrator | }
2026-01-10 14:23:14.499351 | orchestrator |
2026-01-10 14:23:14.499359 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-10 14:23:14.499365 | orchestrator | Saturday 10 January 2026 14:23:11 +0000 (0:00:00.133) 0:00:25.762 ******
2026-01-10 14:23:14.499372 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.499378 | orchestrator |
2026-01-10 14:23:14.499385 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-10 14:23:14.499391 | orchestrator | Saturday 10 January 2026 14:23:12 +0000 (0:00:00.124) 0:00:25.887 ******
2026-01-10 14:23:14.499397 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.499402 | orchestrator |
2026-01-10 14:23:14.499409 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-10 14:23:14.499416 | orchestrator | Saturday 10 January 2026 14:23:12 +0000 (0:00:00.110) 0:00:25.998 ******
2026-01-10 14:23:14.499422 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:23:14.499429 | orchestrator |
2026-01-10 14:23:14.499435 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-10 14:23:14.499442 | orchestrator | Saturday 10 January 2026 14:23:12 +0000 (0:00:00.108) 0:00:26.106 ******
2026-01-10 14:23:14.499448 | orchestrator | changed: [testbed-node-4] => {
2026-01-10 14:23:14.499456 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-10 14:23:14.499461 | orchestrator |         "ceph_osd_devices": {
2026-01-10 14:23:14.499465 | orchestrator |             "sdb": {
2026-01-10 14:23:14.499470 | orchestrator |                 "osd_lvm_uuid": "381f50a6-56c2-5a32-835b-1a08246466ad"
2026-01-10 14:23:14.499474 | orchestrator |             },
2026-01-10 14:23:14.499479 | orchestrator |             "sdc": {
2026-01-10 14:23:14.499483 | orchestrator |                 "osd_lvm_uuid": "5a6c1f07-f96f-5f9c-9404-64a84774a9be"
2026-01-10 14:23:14.499487 | orchestrator |             }
2026-01-10 14:23:14.499492 | orchestrator |         },
2026-01-10 14:23:14.499495 | orchestrator |         "lvm_volumes": [
2026-01-10 14:23:14.499499 | orchestrator |             {
2026-01-10 14:23:14.499503 | orchestrator |                 "data": "osd-block-381f50a6-56c2-5a32-835b-1a08246466ad",
2026-01-10 14:23:14.499507 | orchestrator |                 "data_vg": "ceph-381f50a6-56c2-5a32-835b-1a08246466ad"
2026-01-10 14:23:14.499511 | orchestrator |             },
2026-01-10 14:23:14.499514 | orchestrator |             {
2026-01-10 14:23:14.499518 | orchestrator |                 "data": "osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be",
2026-01-10 14:23:14.499522 | orchestrator |                 "data_vg": "ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be"
2026-01-10 14:23:14.499525 | orchestrator |             }
2026-01-10 14:23:14.499529 | orchestrator |         ]
2026-01-10 14:23:14.499533 | orchestrator |     }
2026-01-10 14:23:14.499537 | orchestrator | }
2026-01-10 14:23:14.499541 | orchestrator |
2026-01-10 14:23:14.499544 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-10 14:23:14.499548 | orchestrator | Saturday 10 January 2026 14:23:12 +0000 (0:00:00.200) 0:00:26.307 ******
2026-01-10 14:23:14.499552 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-10 14:23:14.499556 | orchestrator |
2026-01-10 14:23:14.499559 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-10 14:23:14.499563 | orchestrator |
2026-01-10 14:23:14.499567 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 14:23:14.499570 | orchestrator | Saturday 10 January 2026 14:23:13 +0000 (0:00:00.923) 0:00:27.230 ******
2026-01-10 14:23:14.499574 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-10 14:23:14.499578 | orchestrator |
2026-01-10 14:23:14.499582 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-10 14:23:14.499593 | orchestrator | Saturday 10 January 2026 14:23:13 +0000 (0:00:00.516) 0:00:27.747 ******
2026-01-10 14:23:14.499597 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:23:14.499601 | orchestrator |
2026-01-10 14:23:14.499605 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:14.499609 | orchestrator | Saturday 10 January 2026 14:23:14 +0000 (0:00:00.206) 0:00:27.954 ******
2026-01-10 14:23:14.499613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-10 14:23:14.499616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-10 14:23:14.499620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-10 14:23:14.499624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-10 14:23:14.499627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-10 14:23:14.499637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-10 14:23:22.215406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-10 14:23:22.215546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-10 14:23:22.215576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-10 14:23:22.215595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-10 14:23:22.215614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-10 14:23:22.215633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-10 14:23:22.215652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-10 14:23:22.215672 | orchestrator |
2026-01-10 14:23:22.215692 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.215713 | orchestrator | Saturday 10 January 2026 14:23:14 +0000 (0:00:00.301) 0:00:28.255 ******
2026-01-10 14:23:22.215732 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:23:22.215751 | orchestrator |
2026-01-10 14:23:22.215769 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.215787 | orchestrator | Saturday 10 January 2026 14:23:14 +0000 (0:00:00.169) 0:00:28.425 ******
2026-01-10 14:23:22.215806 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:23:22.215824 | orchestrator |
2026-01-10 14:23:22.215844 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.215908 | orchestrator | Saturday 10 January 2026 14:23:14 +0000 (0:00:00.179) 0:00:28.605 ******
2026-01-10 14:23:22.215929 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:23:22.215949 | orchestrator |
2026-01-10 14:23:22.215968 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.215987 | orchestrator | Saturday 10 January 2026 14:23:15 +0000 (0:00:00.173) 0:00:28.778 ******
2026-01-10 14:23:22.216007 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:23:22.216025 | orchestrator |
2026-01-10 14:23:22.216044 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.216061 | orchestrator | Saturday 10 January 2026 14:23:15 +0000 (0:00:00.179) 0:00:28.958 ******
2026-01-10 14:23:22.216079 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:23:22.216097 | orchestrator |
2026-01-10 14:23:22.216116 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.216136 | orchestrator | Saturday 10 January 2026 14:23:15 +0000 (0:00:00.185) 0:00:29.143 ******
2026-01-10 14:23:22.216155 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:23:22.216174 | orchestrator |
2026-01-10 14:23:22.216194 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.216246 | orchestrator | Saturday 10 January 2026 14:23:15 +0000 (0:00:00.175) 0:00:29.318 ******
2026-01-10 14:23:22.216268 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:23:22.216286 | orchestrator |
2026-01-10 14:23:22.216303 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.216318 | orchestrator | Saturday 10 January 2026 14:23:15 +0000 (0:00:00.178) 0:00:29.497 ******
2026-01-10 14:23:22.216336 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:23:22.216352 | orchestrator |
2026-01-10 14:23:22.216369 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.216387 | orchestrator | Saturday 10 January 2026 14:23:15 +0000 (0:00:00.189) 0:00:29.687 ******
2026-01-10 14:23:22.216404 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1)
2026-01-10 14:23:22.216423 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1)
2026-01-10 14:23:22.216442 | orchestrator |
2026-01-10 14:23:22.216460 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.216479 | orchestrator | Saturday 10 January 2026 14:23:16 +0000 (0:00:00.651) 0:00:30.338 ******
2026-01-10 14:23:22.216498 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6601bfae-4805-46bf-9ab8-35c841e000dc)
2026-01-10 14:23:22.216516 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6601bfae-4805-46bf-9ab8-35c841e000dc)
2026-01-10 14:23:22.216534 | orchestrator |
2026-01-10 14:23:22.216551 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:23:22.216569 | orchestrator | Saturday 10 January 2026 14:23:16 +0000 (0:00:00.371) 0:00:30.709 ******
2026-01-10 14:23:22.216587 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80389416-edd4-4aaf-b80d-5b05821e7076)
2026-01-10 14:23:22.216606 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80389416-edd4-4aaf-b80d-5b05821e7076)
2026-01-10 14:23:22.216625 | orchestrator |
2026-01-10 14:23:22.216643 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:23:22.216660 | orchestrator | Saturday 10 January 2026 14:23:17 +0000 (0:00:00.398) 0:00:31.107 ****** 2026-01-10 14:23:22.216678 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e023e992-ae40-4cae-8e0e-c078bcc164d6) 2026-01-10 14:23:22.216696 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e023e992-ae40-4cae-8e0e-c078bcc164d6) 2026-01-10 14:23:22.216715 | orchestrator | 2026-01-10 14:23:22.216734 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:23:22.216751 | orchestrator | Saturday 10 January 2026 14:23:17 +0000 (0:00:00.415) 0:00:31.523 ****** 2026-01-10 14:23:22.216769 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:23:22.216787 | orchestrator | 2026-01-10 14:23:22.216806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.216849 | orchestrator | Saturday 10 January 2026 14:23:18 +0000 (0:00:00.334) 0:00:31.857 ****** 2026-01-10 14:23:22.216895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-10 14:23:22.216915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-10 14:23:22.216934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-10 14:23:22.216952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-10 14:23:22.216970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-10 14:23:22.217011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-10 
14:23:22.217033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-10 14:23:22.217052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-10 14:23:22.217087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-10 14:23:22.217106 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-10 14:23:22.217125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-10 14:23:22.217143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-10 14:23:22.217162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-10 14:23:22.217180 | orchestrator | 2026-01-10 14:23:22.217199 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.217218 | orchestrator | Saturday 10 January 2026 14:23:18 +0000 (0:00:00.392) 0:00:32.249 ****** 2026-01-10 14:23:22.217237 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.217255 | orchestrator | 2026-01-10 14:23:22.217274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.217292 | orchestrator | Saturday 10 January 2026 14:23:18 +0000 (0:00:00.206) 0:00:32.455 ****** 2026-01-10 14:23:22.217311 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.217329 | orchestrator | 2026-01-10 14:23:22.217348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.217374 | orchestrator | Saturday 10 January 2026 14:23:18 +0000 (0:00:00.220) 0:00:32.676 ****** 2026-01-10 14:23:22.217394 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.217411 | 
orchestrator | 2026-01-10 14:23:22.217429 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.217447 | orchestrator | Saturday 10 January 2026 14:23:19 +0000 (0:00:00.208) 0:00:32.885 ****** 2026-01-10 14:23:22.217466 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.217485 | orchestrator | 2026-01-10 14:23:22.217503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.217521 | orchestrator | Saturday 10 January 2026 14:23:19 +0000 (0:00:00.221) 0:00:33.107 ****** 2026-01-10 14:23:22.217538 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.217557 | orchestrator | 2026-01-10 14:23:22.217575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.217594 | orchestrator | Saturday 10 January 2026 14:23:19 +0000 (0:00:00.374) 0:00:33.482 ****** 2026-01-10 14:23:22.217612 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.217629 | orchestrator | 2026-01-10 14:23:22.217647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.217665 | orchestrator | Saturday 10 January 2026 14:23:20 +0000 (0:00:00.710) 0:00:34.192 ****** 2026-01-10 14:23:22.217681 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.217699 | orchestrator | 2026-01-10 14:23:22.217718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.217737 | orchestrator | Saturday 10 January 2026 14:23:20 +0000 (0:00:00.191) 0:00:34.384 ****** 2026-01-10 14:23:22.217756 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.217774 | orchestrator | 2026-01-10 14:23:22.217791 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.217811 | orchestrator | Saturday 10 January 2026 
14:23:20 +0000 (0:00:00.227) 0:00:34.611 ****** 2026-01-10 14:23:22.217829 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-10 14:23:22.217848 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-10 14:23:22.217892 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-10 14:23:22.217911 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-10 14:23:22.217930 | orchestrator | 2026-01-10 14:23:22.217949 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.217967 | orchestrator | Saturday 10 January 2026 14:23:21 +0000 (0:00:00.582) 0:00:35.194 ****** 2026-01-10 14:23:22.217985 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.218014 | orchestrator | 2026-01-10 14:23:22.218134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.218154 | orchestrator | Saturday 10 January 2026 14:23:21 +0000 (0:00:00.167) 0:00:35.361 ****** 2026-01-10 14:23:22.218173 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.218191 | orchestrator | 2026-01-10 14:23:22.218208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.218228 | orchestrator | Saturday 10 January 2026 14:23:21 +0000 (0:00:00.173) 0:00:35.534 ****** 2026-01-10 14:23:22.218246 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.218264 | orchestrator | 2026-01-10 14:23:22.218276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:23:22.218295 | orchestrator | Saturday 10 January 2026 14:23:22 +0000 (0:00:00.242) 0:00:35.777 ****** 2026-01-10 14:23:22.218314 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:22.218332 | orchestrator | 2026-01-10 14:23:22.218364 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-10 14:23:25.965437 | 
orchestrator | Saturday 10 January 2026 14:23:22 +0000 (0:00:00.202) 0:00:35.980 ****** 2026-01-10 14:23:25.965501 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-01-10 14:23:25.965507 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-01-10 14:23:25.965511 | orchestrator | 2026-01-10 14:23:25.965516 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-10 14:23:25.965520 | orchestrator | Saturday 10 January 2026 14:23:22 +0000 (0:00:00.121) 0:00:36.102 ****** 2026-01-10 14:23:25.965524 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965528 | orchestrator | 2026-01-10 14:23:25.965532 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-10 14:23:25.965536 | orchestrator | Saturday 10 January 2026 14:23:22 +0000 (0:00:00.093) 0:00:36.196 ****** 2026-01-10 14:23:25.965540 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965543 | orchestrator | 2026-01-10 14:23:25.965547 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-10 14:23:25.965551 | orchestrator | Saturday 10 January 2026 14:23:22 +0000 (0:00:00.094) 0:00:36.290 ****** 2026-01-10 14:23:25.965555 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965558 | orchestrator | 2026-01-10 14:23:25.965562 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-10 14:23:25.965566 | orchestrator | Saturday 10 January 2026 14:23:22 +0000 (0:00:00.258) 0:00:36.549 ****** 2026-01-10 14:23:25.965570 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:23:25.965574 | orchestrator | 2026-01-10 14:23:25.965579 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-10 14:23:25.965582 | orchestrator | Saturday 10 January 2026 14:23:22 +0000 (0:00:00.106) 0:00:36.655 
****** 2026-01-10 14:23:25.965587 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'}}) 2026-01-10 14:23:25.965591 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8e61bc65-6745-5d05-9905-13a4cfa0641e'}}) 2026-01-10 14:23:25.965594 | orchestrator | 2026-01-10 14:23:25.965598 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-10 14:23:25.965602 | orchestrator | Saturday 10 January 2026 14:23:23 +0000 (0:00:00.221) 0:00:36.876 ****** 2026-01-10 14:23:25.965606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'}})  2026-01-10 14:23:25.965611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8e61bc65-6745-5d05-9905-13a4cfa0641e'}})  2026-01-10 14:23:25.965615 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965619 | orchestrator | 2026-01-10 14:23:25.965623 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-10 14:23:25.965627 | orchestrator | Saturday 10 January 2026 14:23:23 +0000 (0:00:00.193) 0:00:37.070 ****** 2026-01-10 14:23:25.965647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'}})  2026-01-10 14:23:25.965652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8e61bc65-6745-5d05-9905-13a4cfa0641e'}})  2026-01-10 14:23:25.965656 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965660 | orchestrator | 2026-01-10 14:23:25.965664 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-10 14:23:25.965668 | orchestrator | Saturday 10 January 2026 14:23:23 +0000 (0:00:00.169) 0:00:37.240 ****** 2026-01-10 
14:23:25.965681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'}})  2026-01-10 14:23:25.965686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8e61bc65-6745-5d05-9905-13a4cfa0641e'}})  2026-01-10 14:23:25.965690 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965694 | orchestrator | 2026-01-10 14:23:25.965698 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-10 14:23:25.965702 | orchestrator | Saturday 10 January 2026 14:23:23 +0000 (0:00:00.123) 0:00:37.363 ****** 2026-01-10 14:23:25.965706 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:23:25.965710 | orchestrator | 2026-01-10 14:23:25.965714 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-10 14:23:25.965718 | orchestrator | Saturday 10 January 2026 14:23:23 +0000 (0:00:00.130) 0:00:37.494 ****** 2026-01-10 14:23:25.965722 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:23:25.965726 | orchestrator | 2026-01-10 14:23:25.965730 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-10 14:23:25.965734 | orchestrator | Saturday 10 January 2026 14:23:23 +0000 (0:00:00.121) 0:00:37.615 ****** 2026-01-10 14:23:25.965738 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965742 | orchestrator | 2026-01-10 14:23:25.965746 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-10 14:23:25.965750 | orchestrator | Saturday 10 January 2026 14:23:23 +0000 (0:00:00.123) 0:00:37.739 ****** 2026-01-10 14:23:25.965754 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965758 | orchestrator | 2026-01-10 14:23:25.965762 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-10 14:23:25.965766 
| orchestrator | Saturday 10 January 2026 14:23:24 +0000 (0:00:00.162) 0:00:37.901 ****** 2026-01-10 14:23:25.965770 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965774 | orchestrator | 2026-01-10 14:23:25.965778 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-10 14:23:25.965782 | orchestrator | Saturday 10 January 2026 14:23:24 +0000 (0:00:00.184) 0:00:38.086 ****** 2026-01-10 14:23:25.965786 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:23:25.965790 | orchestrator |  "ceph_osd_devices": { 2026-01-10 14:23:25.965795 | orchestrator |  "sdb": { 2026-01-10 14:23:25.965809 | orchestrator |  "osd_lvm_uuid": "f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f" 2026-01-10 14:23:25.965813 | orchestrator |  }, 2026-01-10 14:23:25.965818 | orchestrator |  "sdc": { 2026-01-10 14:23:25.965821 | orchestrator |  "osd_lvm_uuid": "8e61bc65-6745-5d05-9905-13a4cfa0641e" 2026-01-10 14:23:25.965826 | orchestrator |  } 2026-01-10 14:23:25.965830 | orchestrator |  } 2026-01-10 14:23:25.965834 | orchestrator | } 2026-01-10 14:23:25.965837 | orchestrator | 2026-01-10 14:23:25.965841 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-10 14:23:25.965845 | orchestrator | Saturday 10 January 2026 14:23:24 +0000 (0:00:00.135) 0:00:38.221 ****** 2026-01-10 14:23:25.965849 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965853 | orchestrator | 2026-01-10 14:23:25.965894 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-10 14:23:25.965900 | orchestrator | Saturday 10 January 2026 14:23:24 +0000 (0:00:00.240) 0:00:38.462 ****** 2026-01-10 14:23:25.965913 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965920 | orchestrator | 2026-01-10 14:23:25.965925 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-10 14:23:25.965929 | 
orchestrator | Saturday 10 January 2026 14:23:24 +0000 (0:00:00.123) 0:00:38.585 ****** 2026-01-10 14:23:25.965932 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:23:25.965936 | orchestrator | 2026-01-10 14:23:25.965940 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-10 14:23:25.965943 | orchestrator | Saturday 10 January 2026 14:23:24 +0000 (0:00:00.125) 0:00:38.711 ****** 2026-01-10 14:23:25.965947 | orchestrator | changed: [testbed-node-5] => { 2026-01-10 14:23:25.965950 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-10 14:23:25.965954 | orchestrator |  "ceph_osd_devices": { 2026-01-10 14:23:25.965958 | orchestrator |  "sdb": { 2026-01-10 14:23:25.965961 | orchestrator |  "osd_lvm_uuid": "f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f" 2026-01-10 14:23:25.965965 | orchestrator |  }, 2026-01-10 14:23:25.965969 | orchestrator |  "sdc": { 2026-01-10 14:23:25.965972 | orchestrator |  "osd_lvm_uuid": "8e61bc65-6745-5d05-9905-13a4cfa0641e" 2026-01-10 14:23:25.965976 | orchestrator |  } 2026-01-10 14:23:25.965980 | orchestrator |  }, 2026-01-10 14:23:25.965983 | orchestrator |  "lvm_volumes": [ 2026-01-10 14:23:25.965987 | orchestrator |  { 2026-01-10 14:23:25.965991 | orchestrator |  "data": "osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f", 2026-01-10 14:23:25.965994 | orchestrator |  "data_vg": "ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f" 2026-01-10 14:23:25.965998 | orchestrator |  }, 2026-01-10 14:23:25.966002 | orchestrator |  { 2026-01-10 14:23:25.966006 | orchestrator |  "data": "osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e", 2026-01-10 14:23:25.966010 | orchestrator |  "data_vg": "ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e" 2026-01-10 14:23:25.966014 | orchestrator |  } 2026-01-10 14:23:25.966054 | orchestrator |  ] 2026-01-10 14:23:25.966059 | orchestrator |  } 2026-01-10 14:23:25.966063 | orchestrator | } 2026-01-10 14:23:25.966067 | orchestrator | 2026-01-10 14:23:25.966072 | 
orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-10 14:23:25.966076 | orchestrator | Saturday 10 January 2026 14:23:25 +0000 (0:00:00.193) 0:00:38.905 ****** 2026-01-10 14:23:25.966080 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-10 14:23:25.966085 | orchestrator | 2026-01-10 14:23:25.966089 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:23:25.966094 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:23:25.966099 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:23:25.966103 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:23:25.966106 | orchestrator | 2026-01-10 14:23:25.966110 | orchestrator | 2026-01-10 14:23:25.966114 | orchestrator | 2026-01-10 14:23:25.966118 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:23:25.966121 | orchestrator | Saturday 10 January 2026 14:23:25 +0000 (0:00:00.816) 0:00:39.721 ****** 2026-01-10 14:23:25.966125 | orchestrator | =============================================================================== 2026-01-10 14:23:25.966129 | orchestrator | Write configuration file ------------------------------------------------ 3.59s 2026-01-10 14:23:25.966132 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s 2026-01-10 14:23:25.966136 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s 2026-01-10 14:23:25.966140 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s 2026-01-10 14:23:25.966148 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 
1.03s 2026-01-10 14:23:25.966152 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2026-01-10 14:23:25.966155 | orchestrator | Print configuration data ------------------------------------------------ 0.82s 2026-01-10 14:23:25.966159 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2026-01-10 14:23:25.966163 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-01-10 14:23:25.966167 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-01-10 14:23:25.966170 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2026-01-10 14:23:25.966174 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-01-10 14:23:25.966177 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s 2026-01-10 14:23:25.966185 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-01-10 14:23:26.186403 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-01-10 14:23:26.186514 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2026-01-10 14:23:26.186529 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2026-01-10 14:23:26.186541 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.54s 2026-01-10 14:23:26.186552 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.51s 2026-01-10 14:23:26.186564 | orchestrator | Set DB devices config data ---------------------------------------------- 0.51s 2026-01-10 14:23:48.557500 | orchestrator | 2026-01-10 14:23:48 | INFO  | Task decfcbc9-f14a-4d50-8b54-da462581f494 (sync inventory) 
is running in background. Output coming soon. 2026-01-10 14:24:15.215986 | orchestrator | 2026-01-10 14:23:50 | INFO  | Starting group_vars file reorganization 2026-01-10 14:24:15.216127 | orchestrator | 2026-01-10 14:23:50 | INFO  | Moved 0 file(s) to their respective directories 2026-01-10 14:24:15.216151 | orchestrator | 2026-01-10 14:23:50 | INFO  | Group_vars file reorganization completed 2026-01-10 14:24:15.216168 | orchestrator | 2026-01-10 14:23:52 | INFO  | Starting variable preparation from inventory 2026-01-10 14:24:15.216186 | orchestrator | 2026-01-10 14:23:55 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-01-10 14:24:15.216203 | orchestrator | 2026-01-10 14:23:55 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-01-10 14:24:15.216248 | orchestrator | 2026-01-10 14:23:55 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-01-10 14:24:15.216265 | orchestrator | 2026-01-10 14:23:55 | INFO  | 3 file(s) written, 6 host(s) processed 2026-01-10 14:24:15.216282 | orchestrator | 2026-01-10 14:23:55 | INFO  | Variable preparation completed 2026-01-10 14:24:15.216298 | orchestrator | 2026-01-10 14:23:56 | INFO  | Starting inventory overwrite handling 2026-01-10 14:24:15.216322 | orchestrator | 2026-01-10 14:23:56 | INFO  | Handling group overwrites in 99-overwrite 2026-01-10 14:24:15.216339 | orchestrator | 2026-01-10 14:23:56 | INFO  | Removing group frr:children from 60-generic 2026-01-10 14:24:15.216356 | orchestrator | 2026-01-10 14:23:56 | INFO  | Removing group netbird:children from 50-infrastructure 2026-01-10 14:24:15.216373 | orchestrator | 2026-01-10 14:23:56 | INFO  | Removing group ceph-mds from 50-ceph 2026-01-10 14:24:15.216389 | orchestrator | 2026-01-10 14:23:56 | INFO  | Removing group ceph-rgw from 50-ceph 2026-01-10 14:24:15.216404 | orchestrator | 2026-01-10 14:23:56 | INFO  | Handling group overwrites in 20-roles 2026-01-10 14:24:15.216448 | orchestrator | 
2026-01-10 14:23:56 | INFO  | Removing group k3s_node from 50-infrastructure 2026-01-10 14:24:15.216463 | orchestrator | 2026-01-10 14:23:56 | INFO  | Removed 5 group(s) in total 2026-01-10 14:24:15.216476 | orchestrator | 2026-01-10 14:23:56 | INFO  | Inventory overwrite handling completed 2026-01-10 14:24:15.216489 | orchestrator | 2026-01-10 14:23:58 | INFO  | Starting merge of inventory files 2026-01-10 14:24:15.216502 | orchestrator | 2026-01-10 14:23:58 | INFO  | Inventory files merged successfully 2026-01-10 14:24:15.216514 | orchestrator | 2026-01-10 14:24:03 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-01-10 14:24:15.216527 | orchestrator | 2026-01-10 14:24:13 | INFO  | Successfully wrote ClusterShell configuration 2026-01-10 14:24:15.216540 | orchestrator | [master 2cb7ecc] 2026-01-10-14-24 2026-01-10 14:24:15.216555 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-01-10 14:24:17.656106 | orchestrator | 2026-01-10 14:24:17 | INFO  | Task d818df27-b891-4a00-8677-f257c5471702 (ceph-create-lvm-devices) was prepared for execution. 2026-01-10 14:24:17.656230 | orchestrator | 2026-01-10 14:24:17 | INFO  | It takes a moment until task d818df27-b891-4a00-8677-f257c5471702 (ceph-create-lvm-devices) has been started and output is visible here. 
2026-01-10 14:24:29.237545 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-10 14:24:29.237647 | orchestrator | 2.16.14
2026-01-10 14:24:29.237661 | orchestrator |
2026-01-10 14:24:29.237669 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-10 14:24:29.237678 | orchestrator |
2026-01-10 14:24:29.237686 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 14:24:29.237694 | orchestrator | Saturday 10 January 2026 14:24:22 +0000 (0:00:00.298) 0:00:00.298 ******
2026-01-10 14:24:29.237702 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 14:24:29.237709 | orchestrator |
2026-01-10 14:24:29.237716 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-10 14:24:29.237724 | orchestrator | Saturday 10 January 2026 14:24:22 +0000 (0:00:00.219) 0:00:00.517 ******
2026-01-10 14:24:29.237731 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:24:29.237739 | orchestrator |
2026-01-10 14:24:29.237747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.237755 | orchestrator | Saturday 10 January 2026 14:24:22 +0000 (0:00:00.217) 0:00:00.735 ******
2026-01-10 14:24:29.237762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-10 14:24:29.237770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-10 14:24:29.237777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-10 14:24:29.237784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-10 14:24:29.237791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-10 14:24:29.237798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-10 14:24:29.237805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-10 14:24:29.237860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-10 14:24:29.237870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-10 14:24:29.237877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-10 14:24:29.237884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-10 14:24:29.237891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-10 14:24:29.237918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-10 14:24:29.237926 | orchestrator |
2026-01-10 14:24:29.237933 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.237941 | orchestrator | Saturday 10 January 2026 14:24:22 +0000 (0:00:00.447) 0:00:01.183 ******
2026-01-10 14:24:29.237948 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.237955 | orchestrator |
2026-01-10 14:24:29.237962 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.237969 | orchestrator | Saturday 10 January 2026 14:24:23 +0000 (0:00:00.166) 0:00:01.350 ******
2026-01-10 14:24:29.237976 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.237984 | orchestrator |
2026-01-10 14:24:29.237991 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.237999 | orchestrator | Saturday 10 January 2026 14:24:23 +0000 (0:00:00.178) 0:00:01.528 ******
2026-01-10 14:24:29.238006 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238013 | orchestrator |
2026-01-10 14:24:29.238070 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.238078 | orchestrator | Saturday 10 January 2026 14:24:23 +0000 (0:00:00.168) 0:00:01.696 ******
2026-01-10 14:24:29.238085 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238093 | orchestrator |
2026-01-10 14:24:29.238102 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.238110 | orchestrator | Saturday 10 January 2026 14:24:23 +0000 (0:00:00.195) 0:00:01.892 ******
2026-01-10 14:24:29.238118 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238126 | orchestrator |
2026-01-10 14:24:29.238134 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.238142 | orchestrator | Saturday 10 January 2026 14:24:23 +0000 (0:00:00.177) 0:00:02.070 ******
2026-01-10 14:24:29.238150 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238159 | orchestrator |
2026-01-10 14:24:29.238167 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.238175 | orchestrator | Saturday 10 January 2026 14:24:23 +0000 (0:00:00.172) 0:00:02.242 ******
2026-01-10 14:24:29.238182 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238190 | orchestrator |
2026-01-10 14:24:29.238198 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.238207 | orchestrator | Saturday 10 January 2026 14:24:24 +0000 (0:00:00.202) 0:00:02.445 ******
2026-01-10 14:24:29.238215 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238222 | orchestrator |
2026-01-10 14:24:29.238231 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.238239 | orchestrator | Saturday 10 January 2026 14:24:24 +0000 (0:00:00.193) 0:00:02.638 ******
2026-01-10 14:24:29.238247 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196)
2026-01-10 14:24:29.238257 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196)
2026-01-10 14:24:29.238265 | orchestrator |
2026-01-10 14:24:29.238274 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.238295 | orchestrator | Saturday 10 January 2026 14:24:24 +0000 (0:00:00.458) 0:00:03.097 ******
2026-01-10 14:24:29.238304 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fb1cd23c-1eba-48f8-b0af-e37f12bddfbe)
2026-01-10 14:24:29.238312 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fb1cd23c-1eba-48f8-b0af-e37f12bddfbe)
2026-01-10 14:24:29.238320 | orchestrator |
2026-01-10 14:24:29.238328 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.238337 | orchestrator | Saturday 10 January 2026 14:24:25 +0000 (0:00:00.688) 0:00:03.785 ******
2026-01-10 14:24:29.238345 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2ce7cca4-0817-4dba-a1e7-697e67028341)
2026-01-10 14:24:29.238359 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2ce7cca4-0817-4dba-a1e7-697e67028341)
2026-01-10 14:24:29.238367 | orchestrator |
2026-01-10 14:24:29.238375 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.238383 | orchestrator | Saturday 10 January 2026 14:24:26 +0000 (0:00:00.574) 0:00:04.360 ******
2026-01-10 14:24:29.238391 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_644eb2b6-5717-40d5-adcd-cd376a39a92a)
2026-01-10 14:24:29.238399 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_644eb2b6-5717-40d5-adcd-cd376a39a92a)
2026-01-10 14:24:29.238408 | orchestrator |
2026-01-10 14:24:29.238416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:24:29.238424 | orchestrator | Saturday 10 January 2026 14:24:26 +0000 (0:00:00.882) 0:00:05.243 ******
2026-01-10 14:24:29.238432 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-10 14:24:29.238441 | orchestrator |
2026-01-10 14:24:29.238449 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:29.238456 | orchestrator | Saturday 10 January 2026 14:24:27 +0000 (0:00:00.339) 0:00:05.582 ******
2026-01-10 14:24:29.238463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-10 14:24:29.238470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-10 14:24:29.238477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-10 14:24:29.238498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-10 14:24:29.238506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-10 14:24:29.238513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-10 14:24:29.238520 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-10 14:24:29.238527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-10 14:24:29.238534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-10 14:24:29.238541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-10 14:24:29.238548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-10 14:24:29.238558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-10 14:24:29.238565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-10 14:24:29.238573 | orchestrator |
2026-01-10 14:24:29.238580 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:29.238587 | orchestrator | Saturday 10 January 2026 14:24:27 +0000 (0:00:00.474) 0:00:06.057 ******
2026-01-10 14:24:29.238594 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238601 | orchestrator |
2026-01-10 14:24:29.238608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:29.238615 | orchestrator | Saturday 10 January 2026 14:24:28 +0000 (0:00:00.218) 0:00:06.275 ******
2026-01-10 14:24:29.238622 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238629 | orchestrator |
2026-01-10 14:24:29.238636 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:29.238643 | orchestrator | Saturday 10 January 2026 14:24:28 +0000 (0:00:00.212) 0:00:06.487 ******
2026-01-10 14:24:29.238650 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238657 | orchestrator |
2026-01-10 14:24:29.238664 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:29.238671 | orchestrator | Saturday 10 January 2026 14:24:28 +0000 (0:00:00.200) 0:00:06.688 ******
2026-01-10 14:24:29.238683 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238690 | orchestrator |
2026-01-10 14:24:29.238697 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:29.238704 | orchestrator | Saturday 10 January 2026 14:24:28 +0000 (0:00:00.206) 0:00:06.894 ******
2026-01-10 14:24:29.238712 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238719 | orchestrator |
2026-01-10 14:24:29.238726 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:29.238733 | orchestrator | Saturday 10 January 2026 14:24:28 +0000 (0:00:00.210) 0:00:07.105 ******
2026-01-10 14:24:29.238740 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238747 | orchestrator |
2026-01-10 14:24:29.238754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:29.238761 | orchestrator | Saturday 10 January 2026 14:24:29 +0000 (0:00:00.185) 0:00:07.290 ******
2026-01-10 14:24:29.238768 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:29.238775 | orchestrator |
2026-01-10 14:24:29.238786 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:37.595386 | orchestrator | Saturday 10 January 2026 14:24:29 +0000 (0:00:00.190) 0:00:07.480 ******
2026-01-10 14:24:37.595490 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.595504 | orchestrator |
2026-01-10 14:24:37.595514 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:37.595524 | orchestrator | Saturday 10 January 2026 14:24:29 +0000 (0:00:00.204) 0:00:07.685 ******
2026-01-10 14:24:37.595533 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-10 14:24:37.595543 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-10 14:24:37.595552 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-10 14:24:37.595560 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-10 14:24:37.595569 | orchestrator |
2026-01-10 14:24:37.595578 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:37.595586 | orchestrator | Saturday 10 January 2026 14:24:30 +0000 (0:00:01.053) 0:00:08.738 ******
2026-01-10 14:24:37.595595 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.595603 | orchestrator |
2026-01-10 14:24:37.595612 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:37.595621 | orchestrator | Saturday 10 January 2026 14:24:30 +0000 (0:00:00.226) 0:00:08.965 ******
2026-01-10 14:24:37.595629 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.595638 | orchestrator |
2026-01-10 14:24:37.595646 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:37.595655 | orchestrator | Saturday 10 January 2026 14:24:30 +0000 (0:00:00.226) 0:00:09.192 ******
2026-01-10 14:24:37.595664 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.595673 | orchestrator |
2026-01-10 14:24:37.595682 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:24:37.595690 | orchestrator | Saturday 10 January 2026 14:24:31 +0000 (0:00:00.204) 0:00:09.397 ******
2026-01-10 14:24:37.595699 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.595707 | orchestrator |
2026-01-10 14:24:37.595716 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-10 14:24:37.595725 | orchestrator | Saturday 10 January 2026 14:24:31 +0000 (0:00:00.210) 0:00:09.607 ******
2026-01-10 14:24:37.595733 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.595741 | orchestrator |
2026-01-10 14:24:37.595750 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-10 14:24:37.595759 | orchestrator | Saturday 10 January 2026 14:24:31 +0000 (0:00:00.143) 0:00:09.751 ******
2026-01-10 14:24:37.595768 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'}})
2026-01-10 14:24:37.595777 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aeb55798-e032-5872-951c-62472db4891e'}})
2026-01-10 14:24:37.595786 | orchestrator |
2026-01-10 14:24:37.595794 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-10 14:24:37.595866 | orchestrator | Saturday 10 January 2026 14:24:31 +0000 (0:00:00.202) 0:00:09.953 ******
2026-01-10 14:24:37.595879 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:37.595889 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:37.595898 | orchestrator |
2026-01-10 14:24:37.595906 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-10 14:24:37.595915 | orchestrator | Saturday 10 January 2026 14:24:33 +0000 (0:00:01.977) 0:00:11.931 ******
2026-01-10 14:24:37.595924 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:37.595937 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:37.595947 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.595956 | orchestrator |
2026-01-10 14:24:37.595966 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-10 14:24:37.595976 | orchestrator | Saturday 10 January 2026 14:24:33 +0000 (0:00:00.188) 0:00:12.120 ******
2026-01-10 14:24:37.595986 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:37.595996 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:37.596006 | orchestrator |
2026-01-10 14:24:37.596016 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-10 14:24:37.596027 | orchestrator | Saturday 10 January 2026 14:24:35 +0000 (0:00:01.449) 0:00:13.570 ******
2026-01-10 14:24:37.596036 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:37.596046 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:37.596056 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596065 | orchestrator |
2026-01-10 14:24:37.596075 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-10 14:24:37.596085 | orchestrator | Saturday 10 January 2026 14:24:35 +0000 (0:00:00.145) 0:00:13.745 ******
2026-01-10 14:24:37.596111 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596121 | orchestrator |
2026-01-10 14:24:37.596130 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-10 14:24:37.596141 | orchestrator | Saturday 10 January 2026 14:24:35 +0000 (0:00:00.145) 0:00:13.890 ******
2026-01-10 14:24:37.596150 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:37.596160 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:37.596169 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596179 | orchestrator |
2026-01-10 14:24:37.596188 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-10 14:24:37.596198 | orchestrator | Saturday 10 January 2026 14:24:36 +0000 (0:00:00.397) 0:00:14.288 ******
2026-01-10 14:24:37.596207 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596217 | orchestrator |
2026-01-10 14:24:37.596227 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-10 14:24:37.596236 | orchestrator | Saturday 10 January 2026 14:24:36 +0000 (0:00:00.150) 0:00:14.439 ******
2026-01-10 14:24:37.596253 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:37.596263 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:37.596273 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596283 | orchestrator |
2026-01-10 14:24:37.596293 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-10 14:24:37.596302 | orchestrator | Saturday 10 January 2026 14:24:36 +0000 (0:00:00.169) 0:00:14.608 ******
2026-01-10 14:24:37.596310 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596319 | orchestrator |
2026-01-10 14:24:37.596327 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-10 14:24:37.596336 | orchestrator | Saturday 10 January 2026 14:24:36 +0000 (0:00:00.155) 0:00:14.763 ******
2026-01-10 14:24:37.596345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:37.596353 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:37.596362 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596370 | orchestrator |
2026-01-10 14:24:37.596379 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-10 14:24:37.596387 | orchestrator | Saturday 10 January 2026 14:24:36 +0000 (0:00:00.176) 0:00:14.928 ******
2026-01-10 14:24:37.596396 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:24:37.596405 | orchestrator |
2026-01-10 14:24:37.596413 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-10 14:24:37.596437 | orchestrator | Saturday 10 January 2026 14:24:36 +0000 (0:00:00.176) 0:00:15.105 ******
2026-01-10 14:24:37.596451 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:37.596460 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:37.596469 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596477 | orchestrator |
2026-01-10 14:24:37.596486 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-10 14:24:37.596495 | orchestrator | Saturday 10 January 2026 14:24:37 +0000 (0:00:00.186) 0:00:15.291 ******
2026-01-10 14:24:37.596503 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:37.596512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:37.596521 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596529 | orchestrator |
2026-01-10 14:24:37.596538 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-10 14:24:37.596547 | orchestrator | Saturday 10 January 2026 14:24:37 +0000 (0:00:00.189) 0:00:15.480 ******
2026-01-10 14:24:37.596555 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:37.596564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:37.596572 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596581 | orchestrator |
2026-01-10 14:24:37.596590 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-10 14:24:37.596604 | orchestrator | Saturday 10 January 2026 14:24:37 +0000 (0:00:00.194) 0:00:15.675 ******
2026-01-10 14:24:37.596612 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:37.596621 | orchestrator |
2026-01-10 14:24:37.596629 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-10 14:24:37.596644 | orchestrator | Saturday 10 January 2026 14:24:37 +0000 (0:00:00.168) 0:00:15.844 ******
2026-01-10 14:24:44.588151 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588246 | orchestrator |
2026-01-10 14:24:44.588257 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-10 14:24:44.588266 | orchestrator | Saturday 10 January 2026 14:24:37 +0000 (0:00:00.169) 0:00:16.014 ******
2026-01-10 14:24:44.588272 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588278 | orchestrator |
2026-01-10 14:24:44.588284 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-10 14:24:44.588291 | orchestrator | Saturday 10 January 2026 14:24:37 +0000 (0:00:00.157) 0:00:16.172 ******
2026-01-10 14:24:44.588297 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:24:44.588305 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-10 14:24:44.588311 | orchestrator | }
2026-01-10 14:24:44.588319 | orchestrator |
2026-01-10 14:24:44.588329 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-10 14:24:44.588339 | orchestrator | Saturday 10 January 2026 14:24:38 +0000 (0:00:00.379) 0:00:16.551 ******
2026-01-10 14:24:44.588349 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:24:44.588358 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-10 14:24:44.588368 | orchestrator | }
2026-01-10 14:24:44.588375 | orchestrator |
2026-01-10 14:24:44.588381 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-10 14:24:44.588389 | orchestrator | Saturday 10 January 2026 14:24:38 +0000 (0:00:00.144) 0:00:16.696 ******
2026-01-10 14:24:44.588397 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:24:44.588404 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-10 14:24:44.588411 | orchestrator | }
2026-01-10 14:24:44.588416 | orchestrator |
2026-01-10 14:24:44.588423 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-10 14:24:44.588430 | orchestrator | Saturday 10 January 2026 14:24:38 +0000 (0:00:00.140) 0:00:16.836 ******
2026-01-10 14:24:44.588436 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:24:44.588443 | orchestrator |
2026-01-10 14:24:44.588449 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-10 14:24:44.588456 | orchestrator | Saturday 10 January 2026 14:24:39 +0000 (0:00:00.709) 0:00:17.546 ******
2026-01-10 14:24:44.588462 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:24:44.588468 | orchestrator |
2026-01-10 14:24:44.588475 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-10 14:24:44.588483 | orchestrator | Saturday 10 January 2026 14:24:39 +0000 (0:00:00.538) 0:00:18.085 ******
2026-01-10 14:24:44.588490 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:24:44.588496 | orchestrator |
2026-01-10 14:24:44.588504 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-10 14:24:44.588511 | orchestrator | Saturday 10 January 2026 14:24:40 +0000 (0:00:00.514) 0:00:18.600 ******
2026-01-10 14:24:44.588518 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:24:44.588524 | orchestrator |
2026-01-10 14:24:44.588532 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-10 14:24:44.588539 | orchestrator | Saturday 10 January 2026 14:24:40 +0000 (0:00:00.146) 0:00:18.746 ******
2026-01-10 14:24:44.588546 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588553 | orchestrator |
2026-01-10 14:24:44.588560 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-10 14:24:44.588567 | orchestrator | Saturday 10 January 2026 14:24:40 +0000 (0:00:00.110) 0:00:18.856 ******
2026-01-10 14:24:44.588574 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588580 | orchestrator |
2026-01-10 14:24:44.588587 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-10 14:24:44.588628 | orchestrator | Saturday 10 January 2026 14:24:40 +0000 (0:00:00.106) 0:00:18.963 ******
2026-01-10 14:24:44.588635 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:24:44.588641 | orchestrator |     "vgs_report": {
2026-01-10 14:24:44.588647 | orchestrator |         "vg": []
2026-01-10 14:24:44.588654 | orchestrator |     }
2026-01-10 14:24:44.588661 | orchestrator | }
2026-01-10 14:24:44.588668 | orchestrator |
2026-01-10 14:24:44.588696 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-10 14:24:44.588703 | orchestrator | Saturday 10 January 2026 14:24:40 +0000 (0:00:00.140) 0:00:19.103 ******
2026-01-10 14:24:44.588716 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588723 | orchestrator |
2026-01-10 14:24:44.588730 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-10 14:24:44.588737 | orchestrator | Saturday 10 January 2026 14:24:40 +0000 (0:00:00.123) 0:00:19.227 ******
2026-01-10 14:24:44.588744 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588751 | orchestrator |
2026-01-10 14:24:44.588759 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-10 14:24:44.588765 | orchestrator | Saturday 10 January 2026 14:24:41 +0000 (0:00:00.141) 0:00:19.369 ******
2026-01-10 14:24:44.588772 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588778 | orchestrator |
2026-01-10 14:24:44.588784 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-10 14:24:44.588790 | orchestrator | Saturday 10 January 2026 14:24:41 +0000 (0:00:00.340) 0:00:19.709 ******
2026-01-10 14:24:44.588796 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588834 | orchestrator |
2026-01-10 14:24:44.588840 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-10 14:24:44.588847 | orchestrator | Saturday 10 January 2026 14:24:41 +0000 (0:00:00.153) 0:00:19.863 ******
2026-01-10 14:24:44.588854 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588862 | orchestrator |
2026-01-10 14:24:44.588869 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-10 14:24:44.588876 | orchestrator | Saturday 10 January 2026 14:24:41 +0000 (0:00:00.173) 0:00:20.036 ******
2026-01-10 14:24:44.588882 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588888 | orchestrator |
2026-01-10 14:24:44.588895 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-10 14:24:44.588902 | orchestrator | Saturday 10 January 2026 14:24:41 +0000 (0:00:00.130) 0:00:20.167 ******
2026-01-10 14:24:44.588909 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588916 | orchestrator |
2026-01-10 14:24:44.588924 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-10 14:24:44.588937 | orchestrator | Saturday 10 January 2026 14:24:42 +0000 (0:00:00.142) 0:00:20.310 ******
2026-01-10 14:24:44.588969 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.588974 | orchestrator |
2026-01-10 14:24:44.588983 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-10 14:24:44.588990 | orchestrator | Saturday 10 January 2026 14:24:42 +0000 (0:00:00.173) 0:00:20.483 ******
2026-01-10 14:24:44.588997 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589003 | orchestrator |
2026-01-10 14:24:44.589010 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-10 14:24:44.589016 | orchestrator | Saturday 10 January 2026 14:24:42 +0000 (0:00:00.134) 0:00:20.619 ******
2026-01-10 14:24:44.589023 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589029 | orchestrator |
2026-01-10 14:24:44.589036 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-10 14:24:44.589042 | orchestrator | Saturday 10 January 2026 14:24:42 +0000 (0:00:00.159) 0:00:20.779 ******
2026-01-10 14:24:44.589049 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589056 | orchestrator |
2026-01-10 14:24:44.589063 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-10 14:24:44.589069 | orchestrator | Saturday 10 January 2026 14:24:42 +0000 (0:00:00.165) 0:00:20.944 ******
2026-01-10 14:24:44.589084 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589090 | orchestrator |
2026-01-10 14:24:44.589097 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-10 14:24:44.589104 | orchestrator | Saturday 10 January 2026 14:24:42 +0000 (0:00:00.151) 0:00:21.095 ******
2026-01-10 14:24:44.589111 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589117 | orchestrator |
2026-01-10 14:24:44.589124 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-10 14:24:44.589131 | orchestrator | Saturday 10 January 2026 14:24:43 +0000 (0:00:00.166) 0:00:21.262 ******
2026-01-10 14:24:44.589138 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589145 | orchestrator |
2026-01-10 14:24:44.589152 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-10 14:24:44.589159 | orchestrator | Saturday 10 January 2026 14:24:43 +0000 (0:00:00.185) 0:00:21.447 ******
2026-01-10 14:24:44.589167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:44.589175 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:44.589181 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589188 | orchestrator |
2026-01-10 14:24:44.589196 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-10 14:24:44.589203 | orchestrator | Saturday 10 January 2026 14:24:43 +0000 (0:00:00.516) 0:00:21.964 ******
2026-01-10 14:24:44.589210 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:44.589217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:44.589223 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589229 | orchestrator |
2026-01-10 14:24:44.589236 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-10 14:24:44.589243 | orchestrator | Saturday 10 January 2026 14:24:43 +0000 (0:00:00.161) 0:00:22.126 ******
2026-01-10 14:24:44.589250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:44.589257 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:44.589264 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589271 | orchestrator |
2026-01-10 14:24:44.589276 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-10 14:24:44.589282 | orchestrator | Saturday 10 January 2026 14:24:44 +0000 (0:00:00.201) 0:00:22.327 ******
2026-01-10 14:24:44.589288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:44.589295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:44.589300 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589306 | orchestrator |
2026-01-10 14:24:44.589313 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-10 14:24:44.589319 | orchestrator | Saturday 10 January 2026 14:24:44 +0000 (0:00:00.176) 0:00:22.505 ******
2026-01-10 14:24:44.589325 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:44.589331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:44.589342 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:44.589361 | orchestrator |
2026-01-10 14:24:44.589368 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-10 14:24:44.589382 | orchestrator | Saturday 10 January 2026 14:24:44 +0000 (0:00:00.176) 0:00:22.681 ******
2026-01-10 14:24:44.589395 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:50.076049 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:50.076138 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:50.076145 | orchestrator |
2026-01-10 14:24:50.076151 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-10 14:24:50.076157 | orchestrator | Saturday 10 January 2026 14:24:44 +0000 (0:00:00.177) 0:00:22.838 ******
2026-01-10 14:24:50.076161 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:50.076166 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:50.076169 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:50.076173 | orchestrator |
2026-01-10 14:24:50.076177 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-10 14:24:50.076181 | orchestrator | Saturday 10 January 2026 14:24:44 +0000 (0:00:00.177) 0:00:23.015 ******
2026-01-10 14:24:50.076185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})
2026-01-10 14:24:50.076189 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})
2026-01-10 14:24:50.076193 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:50.076197 | orchestrator |
2026-01-10 14:24:50.076201 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-10 14:24:50.076204 | orchestrator | Saturday 10 January 2026 14:24:44 +0000 (0:00:00.194) 0:00:23.210 ******
2026-01-10 14:24:50.076208 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:24:50.076213 | orchestrator |
2026-01-10 14:24:50.076216 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-10 14:24:50.076220 | orchestrator | Saturday 10 January 2026 14:24:45 +0000
(0:00:00.534) 0:00:23.744 ****** 2026-01-10 14:24:50.076224 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:24:50.076228 | orchestrator | 2026-01-10 14:24:50.076231 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-10 14:24:50.076235 | orchestrator | Saturday 10 January 2026 14:24:46 +0000 (0:00:00.571) 0:00:24.315 ****** 2026-01-10 14:24:50.076239 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:24:50.076242 | orchestrator | 2026-01-10 14:24:50.076246 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-10 14:24:50.076250 | orchestrator | Saturday 10 January 2026 14:24:46 +0000 (0:00:00.186) 0:00:24.501 ****** 2026-01-10 14:24:50.076254 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'vg_name': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'}) 2026-01-10 14:24:50.076270 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'vg_name': 'ceph-aeb55798-e032-5872-951c-62472db4891e'}) 2026-01-10 14:24:50.076274 | orchestrator | 2026-01-10 14:24:50.076278 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-10 14:24:50.076282 | orchestrator | Saturday 10 January 2026 14:24:46 +0000 (0:00:00.201) 0:00:24.703 ****** 2026-01-10 14:24:50.076301 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})  2026-01-10 14:24:50.076305 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})  2026-01-10 14:24:50.076309 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:24:50.076313 | orchestrator | 2026-01-10 14:24:50.076316 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-10 14:24:50.076320 | orchestrator | Saturday 10 January 2026 14:24:46 +0000 (0:00:00.357) 0:00:25.061 ****** 2026-01-10 14:24:50.076324 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})  2026-01-10 14:24:50.076328 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})  2026-01-10 14:24:50.076331 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:24:50.076336 | orchestrator | 2026-01-10 14:24:50.076340 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-10 14:24:50.076343 | orchestrator | Saturday 10 January 2026 14:24:46 +0000 (0:00:00.156) 0:00:25.217 ****** 2026-01-10 14:24:50.076347 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'})  2026-01-10 14:24:50.076351 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'})  2026-01-10 14:24:50.076355 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:24:50.076359 | orchestrator | 2026-01-10 14:24:50.076362 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-10 14:24:50.076366 | orchestrator | Saturday 10 January 2026 14:24:47 +0000 (0:00:00.163) 0:00:25.381 ****** 2026-01-10 14:24:50.076379 | orchestrator | ok: [testbed-node-3] => { 2026-01-10 14:24:50.076383 | orchestrator |  "lvm_report": { 2026-01-10 14:24:50.076387 | orchestrator |  "lv": [ 2026-01-10 14:24:50.076391 | orchestrator |  { 2026-01-10 14:24:50.076395 | orchestrator |  "lv_name": 
"osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e", 2026-01-10 14:24:50.076400 | orchestrator |  "vg_name": "ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e" 2026-01-10 14:24:50.076404 | orchestrator |  }, 2026-01-10 14:24:50.076407 | orchestrator |  { 2026-01-10 14:24:50.076411 | orchestrator |  "lv_name": "osd-block-aeb55798-e032-5872-951c-62472db4891e", 2026-01-10 14:24:50.076415 | orchestrator |  "vg_name": "ceph-aeb55798-e032-5872-951c-62472db4891e" 2026-01-10 14:24:50.076419 | orchestrator |  } 2026-01-10 14:24:50.076422 | orchestrator |  ], 2026-01-10 14:24:50.076426 | orchestrator |  "pv": [ 2026-01-10 14:24:50.076430 | orchestrator |  { 2026-01-10 14:24:50.076434 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-10 14:24:50.076437 | orchestrator |  "vg_name": "ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e" 2026-01-10 14:24:50.076441 | orchestrator |  }, 2026-01-10 14:24:50.076445 | orchestrator |  { 2026-01-10 14:24:50.076448 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-10 14:24:50.076452 | orchestrator |  "vg_name": "ceph-aeb55798-e032-5872-951c-62472db4891e" 2026-01-10 14:24:50.076456 | orchestrator |  } 2026-01-10 14:24:50.076459 | orchestrator |  ] 2026-01-10 14:24:50.076463 | orchestrator |  } 2026-01-10 14:24:50.076467 | orchestrator | } 2026-01-10 14:24:50.076472 | orchestrator | 2026-01-10 14:24:50.076475 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-10 14:24:50.076479 | orchestrator | 2026-01-10 14:24:50.076483 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-10 14:24:50.076490 | orchestrator | Saturday 10 January 2026 14:24:47 +0000 (0:00:00.298) 0:00:25.679 ****** 2026-01-10 14:24:50.076494 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-10 14:24:50.076497 | orchestrator | 2026-01-10 14:24:50.076501 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-10 
14:24:50.076505 | orchestrator | Saturday 10 January 2026 14:24:47 +0000 (0:00:00.275) 0:00:25.954 ****** 2026-01-10 14:24:50.076509 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:24:50.076512 | orchestrator | 2026-01-10 14:24:50.076516 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:24:50.076520 | orchestrator | Saturday 10 January 2026 14:24:47 +0000 (0:00:00.243) 0:00:26.198 ****** 2026-01-10 14:24:50.076524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-10 14:24:50.076527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-10 14:24:50.076531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-10 14:24:50.076545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-10 14:24:50.076548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-10 14:24:50.076552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-10 14:24:50.076559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-10 14:24:50.076563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-10 14:24:50.076566 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-10 14:24:50.076570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-10 14:24:50.076574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-10 14:24:50.076577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-10 14:24:50.076581 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-10 14:24:50.076585 | orchestrator | 2026-01-10 14:24:50.076588 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:24:50.076592 | orchestrator | Saturday 10 January 2026 14:24:48 +0000 (0:00:00.437) 0:00:26.635 ****** 2026-01-10 14:24:50.076596 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:24:50.076599 | orchestrator | 2026-01-10 14:24:50.076603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:24:50.076608 | orchestrator | Saturday 10 January 2026 14:24:48 +0000 (0:00:00.212) 0:00:26.847 ****** 2026-01-10 14:24:50.076612 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:24:50.076616 | orchestrator | 2026-01-10 14:24:50.076620 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:24:50.076624 | orchestrator | Saturday 10 January 2026 14:24:48 +0000 (0:00:00.197) 0:00:27.044 ****** 2026-01-10 14:24:50.076629 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:24:50.076633 | orchestrator | 2026-01-10 14:24:50.076637 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:24:50.076641 | orchestrator | Saturday 10 January 2026 14:24:49 +0000 (0:00:00.642) 0:00:27.686 ****** 2026-01-10 14:24:50.076646 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:24:50.076650 | orchestrator | 2026-01-10 14:24:50.076654 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:24:50.076659 | orchestrator | Saturday 10 January 2026 14:24:49 +0000 (0:00:00.213) 0:00:27.900 ****** 2026-01-10 14:24:50.076663 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:24:50.076667 | orchestrator | 2026-01-10 14:24:50.076671 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-10 14:24:50.076679 | orchestrator | Saturday 10 January 2026 14:24:49 +0000 (0:00:00.201) 0:00:28.101 ****** 2026-01-10 14:24:50.076683 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:24:50.076688 | orchestrator | 2026-01-10 14:24:50.076695 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:01.969633 | orchestrator | Saturday 10 January 2026 14:24:50 +0000 (0:00:00.222) 0:00:28.324 ****** 2026-01-10 14:25:01.969765 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.969790 | orchestrator | 2026-01-10 14:25:01.969875 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:01.969894 | orchestrator | Saturday 10 January 2026 14:24:50 +0000 (0:00:00.210) 0:00:28.535 ****** 2026-01-10 14:25:01.969912 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.969931 | orchestrator | 2026-01-10 14:25:01.969949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:01.969967 | orchestrator | Saturday 10 January 2026 14:24:50 +0000 (0:00:00.198) 0:00:28.734 ****** 2026-01-10 14:25:01.969985 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e) 2026-01-10 14:25:01.970004 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e) 2026-01-10 14:25:01.970096 | orchestrator | 2026-01-10 14:25:01.970119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:01.970140 | orchestrator | Saturday 10 January 2026 14:24:50 +0000 (0:00:00.392) 0:00:29.126 ****** 2026-01-10 14:25:01.970161 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4c46785e-60ba-460b-8af0-69ed9944293e) 2026-01-10 14:25:01.970185 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4c46785e-60ba-460b-8af0-69ed9944293e) 2026-01-10 14:25:01.970207 | orchestrator | 2026-01-10 14:25:01.970229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:01.970250 | orchestrator | Saturday 10 January 2026 14:24:51 +0000 (0:00:00.415) 0:00:29.542 ****** 2026-01-10 14:25:01.970274 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f60c9e3f-4fb9-4762-8319-6decaa6c25a2) 2026-01-10 14:25:01.970296 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f60c9e3f-4fb9-4762-8319-6decaa6c25a2) 2026-01-10 14:25:01.970319 | orchestrator | 2026-01-10 14:25:01.970338 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:01.970361 | orchestrator | Saturday 10 January 2026 14:24:51 +0000 (0:00:00.442) 0:00:29.984 ****** 2026-01-10 14:25:01.970382 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_56640cac-7dbd-450f-ace0-5456f0f7a79c) 2026-01-10 14:25:01.970402 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_56640cac-7dbd-450f-ace0-5456f0f7a79c) 2026-01-10 14:25:01.970422 | orchestrator | 2026-01-10 14:25:01.970441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:01.970461 | orchestrator | Saturday 10 January 2026 14:24:52 +0000 (0:00:00.742) 0:00:30.727 ****** 2026-01-10 14:25:01.970481 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:25:01.970500 | orchestrator | 2026-01-10 14:25:01.970520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.970540 | orchestrator | Saturday 10 January 2026 14:24:53 +0000 (0:00:00.592) 0:00:31.320 ****** 2026-01-10 14:25:01.970558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-10 14:25:01.970577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-10 14:25:01.970595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-10 14:25:01.970613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-10 14:25:01.970629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-10 14:25:01.970754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-10 14:25:01.970777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-10 14:25:01.970873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-10 14:25:01.970900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-10 14:25:01.970920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-10 14:25:01.970938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-10 14:25:01.970956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-10 14:25:01.970975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-10 14:25:01.970993 | orchestrator | 2026-01-10 14:25:01.971011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971030 | orchestrator | Saturday 10 January 2026 14:24:53 +0000 (0:00:00.876) 0:00:32.196 ****** 2026-01-10 14:25:01.971047 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971065 | orchestrator | 2026-01-10 
14:25:01.971085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971103 | orchestrator | Saturday 10 January 2026 14:24:54 +0000 (0:00:00.211) 0:00:32.407 ****** 2026-01-10 14:25:01.971122 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971140 | orchestrator | 2026-01-10 14:25:01.971158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971176 | orchestrator | Saturday 10 January 2026 14:24:54 +0000 (0:00:00.214) 0:00:32.622 ****** 2026-01-10 14:25:01.971191 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971202 | orchestrator | 2026-01-10 14:25:01.971239 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971250 | orchestrator | Saturday 10 January 2026 14:24:54 +0000 (0:00:00.221) 0:00:32.844 ****** 2026-01-10 14:25:01.971261 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971271 | orchestrator | 2026-01-10 14:25:01.971282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971293 | orchestrator | Saturday 10 January 2026 14:24:54 +0000 (0:00:00.223) 0:00:33.068 ****** 2026-01-10 14:25:01.971304 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971315 | orchestrator | 2026-01-10 14:25:01.971325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971336 | orchestrator | Saturday 10 January 2026 14:24:55 +0000 (0:00:00.240) 0:00:33.308 ****** 2026-01-10 14:25:01.971347 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971357 | orchestrator | 2026-01-10 14:25:01.971368 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971379 | orchestrator | Saturday 10 January 2026 14:24:55 +0000 (0:00:00.246) 
0:00:33.554 ****** 2026-01-10 14:25:01.971389 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971400 | orchestrator | 2026-01-10 14:25:01.971411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971422 | orchestrator | Saturday 10 January 2026 14:24:55 +0000 (0:00:00.224) 0:00:33.779 ****** 2026-01-10 14:25:01.971432 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971443 | orchestrator | 2026-01-10 14:25:01.971454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971464 | orchestrator | Saturday 10 January 2026 14:24:55 +0000 (0:00:00.208) 0:00:33.987 ****** 2026-01-10 14:25:01.971475 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-10 14:25:01.971486 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-10 14:25:01.971498 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-10 14:25:01.971508 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-10 14:25:01.971532 | orchestrator | 2026-01-10 14:25:01.971542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971552 | orchestrator | Saturday 10 January 2026 14:24:56 +0000 (0:00:00.997) 0:00:34.985 ****** 2026-01-10 14:25:01.971562 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971571 | orchestrator | 2026-01-10 14:25:01.971581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971590 | orchestrator | Saturday 10 January 2026 14:24:56 +0000 (0:00:00.205) 0:00:35.190 ****** 2026-01-10 14:25:01.971599 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971608 | orchestrator | 2026-01-10 14:25:01.971618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971627 | orchestrator | Saturday 10 
January 2026 14:24:57 +0000 (0:00:00.725) 0:00:35.916 ****** 2026-01-10 14:25:01.971637 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971646 | orchestrator | 2026-01-10 14:25:01.971656 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:01.971665 | orchestrator | Saturday 10 January 2026 14:24:57 +0000 (0:00:00.250) 0:00:36.167 ****** 2026-01-10 14:25:01.971675 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971684 | orchestrator | 2026-01-10 14:25:01.971693 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-10 14:25:01.971712 | orchestrator | Saturday 10 January 2026 14:24:58 +0000 (0:00:00.243) 0:00:36.411 ****** 2026-01-10 14:25:01.971721 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971731 | orchestrator | 2026-01-10 14:25:01.971741 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-10 14:25:01.971750 | orchestrator | Saturday 10 January 2026 14:24:58 +0000 (0:00:00.146) 0:00:36.557 ****** 2026-01-10 14:25:01.971759 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '381f50a6-56c2-5a32-835b-1a08246466ad'}}) 2026-01-10 14:25:01.971769 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a6c1f07-f96f-5f9c-9404-64a84774a9be'}}) 2026-01-10 14:25:01.971779 | orchestrator | 2026-01-10 14:25:01.971788 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-10 14:25:01.971849 | orchestrator | Saturday 10 January 2026 14:24:58 +0000 (0:00:00.202) 0:00:36.759 ****** 2026-01-10 14:25:01.971861 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'}) 2026-01-10 14:25:01.971873 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'}) 2026-01-10 14:25:01.971883 | orchestrator | 2026-01-10 14:25:01.971892 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-10 14:25:01.971902 | orchestrator | Saturday 10 January 2026 14:25:00 +0000 (0:00:01.924) 0:00:38.683 ****** 2026-01-10 14:25:01.971911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})  2026-01-10 14:25:01.971922 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})  2026-01-10 14:25:01.971932 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:01.971941 | orchestrator | 2026-01-10 14:25:01.971951 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-10 14:25:01.971960 | orchestrator | Saturday 10 January 2026 14:25:00 +0000 (0:00:00.163) 0:00:38.847 ****** 2026-01-10 14:25:01.971970 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'}) 2026-01-10 14:25:01.971987 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'}) 2026-01-10 14:25:07.910298 | orchestrator | 2026-01-10 14:25:07.910396 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-10 14:25:07.910408 | orchestrator | Saturday 10 January 2026 14:25:01 +0000 (0:00:01.369) 0:00:40.216 ****** 2026-01-10 14:25:07.910417 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 
'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})  2026-01-10 14:25:07.910427 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})  2026-01-10 14:25:07.910435 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:07.910444 | orchestrator | 2026-01-10 14:25:07.910452 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-10 14:25:07.910460 | orchestrator | Saturday 10 January 2026 14:25:02 +0000 (0:00:00.183) 0:00:40.400 ****** 2026-01-10 14:25:07.910469 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:07.910477 | orchestrator | 2026-01-10 14:25:07.910485 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-10 14:25:07.910493 | orchestrator | Saturday 10 January 2026 14:25:02 +0000 (0:00:00.158) 0:00:40.558 ****** 2026-01-10 14:25:07.910501 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})  2026-01-10 14:25:07.910509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})  2026-01-10 14:25:07.910517 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:07.910525 | orchestrator | 2026-01-10 14:25:07.910532 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-10 14:25:07.910540 | orchestrator | Saturday 10 January 2026 14:25:02 +0000 (0:00:00.211) 0:00:40.770 ****** 2026-01-10 14:25:07.910548 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:07.910556 | orchestrator | 2026-01-10 14:25:07.910564 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-10 14:25:07.910572 | orchestrator | 
Saturday 10 January 2026 14:25:02 +0000 (0:00:00.158) 0:00:40.928 ****** 2026-01-10 14:25:07.910580 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})  2026-01-10 14:25:07.910588 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})  2026-01-10 14:25:07.910595 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:07.910603 | orchestrator | 2026-01-10 14:25:07.910611 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-10 14:25:07.910633 | orchestrator | Saturday 10 January 2026 14:25:03 +0000 (0:00:00.396) 0:00:41.324 ****** 2026-01-10 14:25:07.910642 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:07.910650 | orchestrator | 2026-01-10 14:25:07.910658 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-10 14:25:07.910666 | orchestrator | Saturday 10 January 2026 14:25:03 +0000 (0:00:00.171) 0:00:41.495 ****** 2026-01-10 14:25:07.910674 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})  2026-01-10 14:25:07.910682 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})  2026-01-10 14:25:07.910689 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:07.910697 | orchestrator | 2026-01-10 14:25:07.910705 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-10 14:25:07.910713 | orchestrator | Saturday 10 January 2026 14:25:03 +0000 (0:00:00.194) 0:00:41.690 ****** 2026-01-10 14:25:07.910721 | orchestrator | ok: [testbed-node-4] 
2026-01-10 14:25:07.910748 | orchestrator |
2026-01-10 14:25:07.910757 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-10 14:25:07.910765 | orchestrator | Saturday 10 January 2026 14:25:03 +0000 (0:00:00.136) 0:00:41.827 ******
2026-01-10 14:25:07.910773 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:07.910781 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:07.910789 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.910830 | orchestrator |
2026-01-10 14:25:07.910843 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-10 14:25:07.910858 | orchestrator | Saturday 10 January 2026 14:25:03 +0000 (0:00:00.144) 0:00:41.972 ******
2026-01-10 14:25:07.910866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:07.910874 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:07.910881 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.910889 | orchestrator |
2026-01-10 14:25:07.910897 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-10 14:25:07.910919 | orchestrator | Saturday 10 January 2026 14:25:03 +0000 (0:00:00.158) 0:00:42.131 ******
2026-01-10 14:25:07.910927 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:07.910935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:07.910943 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.910951 | orchestrator |
2026-01-10 14:25:07.910959 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-10 14:25:07.910967 | orchestrator | Saturday 10 January 2026 14:25:04 +0000 (0:00:00.185) 0:00:42.317 ******
2026-01-10 14:25:07.910974 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.910982 | orchestrator |
2026-01-10 14:25:07.910990 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-10 14:25:07.910998 | orchestrator | Saturday 10 January 2026 14:25:04 +0000 (0:00:00.141) 0:00:42.458 ******
2026-01-10 14:25:07.911006 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.911013 | orchestrator |
2026-01-10 14:25:07.911021 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-10 14:25:07.911029 | orchestrator | Saturday 10 January 2026 14:25:04 +0000 (0:00:00.171) 0:00:42.630 ******
2026-01-10 14:25:07.911037 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.911045 | orchestrator |
2026-01-10 14:25:07.911052 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-10 14:25:07.911060 | orchestrator | Saturday 10 January 2026 14:25:04 +0000 (0:00:00.147) 0:00:42.777 ******
2026-01-10 14:25:07.911068 | orchestrator | ok: [testbed-node-4] => {
2026-01-10 14:25:07.911076 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-10 14:25:07.911084 | orchestrator | }
2026-01-10 14:25:07.911092 | orchestrator |
2026-01-10 14:25:07.911100 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-10 14:25:07.911108 | orchestrator | Saturday 10 January 2026 14:25:04 +0000 (0:00:00.145) 0:00:42.923 ******
2026-01-10 14:25:07.911116 | orchestrator | ok: [testbed-node-4] => {
2026-01-10 14:25:07.911124 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-10 14:25:07.911132 | orchestrator | }
2026-01-10 14:25:07.911140 | orchestrator |
2026-01-10 14:25:07.911148 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-10 14:25:07.911156 | orchestrator | Saturday 10 January 2026 14:25:04 +0000 (0:00:00.152) 0:00:43.075 ******
2026-01-10 14:25:07.911170 | orchestrator | ok: [testbed-node-4] => {
2026-01-10 14:25:07.911178 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-10 14:25:07.911186 | orchestrator | }
2026-01-10 14:25:07.911194 | orchestrator |
2026-01-10 14:25:07.911202 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-10 14:25:07.911210 | orchestrator | Saturday 10 January 2026 14:25:05 +0000 (0:00:00.350) 0:00:43.426 ******
2026-01-10 14:25:07.911217 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:07.911225 | orchestrator |
2026-01-10 14:25:07.911233 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-10 14:25:07.911242 | orchestrator | Saturday 10 January 2026 14:25:05 +0000 (0:00:00.561) 0:00:43.988 ******
2026-01-10 14:25:07.911249 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:07.911257 | orchestrator |
2026-01-10 14:25:07.911265 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-10 14:25:07.911273 | orchestrator | Saturday 10 January 2026 14:25:06 +0000 (0:00:00.517) 0:00:44.506 ******
2026-01-10 14:25:07.911281 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:07.911288 | orchestrator |
2026-01-10 14:25:07.911296 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-10 14:25:07.911304 | orchestrator | Saturday 10 January 2026 14:25:06 +0000 (0:00:00.531) 0:00:45.037 ******
2026-01-10 14:25:07.911312 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:07.911319 | orchestrator |
2026-01-10 14:25:07.911327 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-10 14:25:07.911335 | orchestrator | Saturday 10 January 2026 14:25:06 +0000 (0:00:00.143) 0:00:45.181 ******
2026-01-10 14:25:07.911343 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.911350 | orchestrator |
2026-01-10 14:25:07.911365 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-10 14:25:07.911373 | orchestrator | Saturday 10 January 2026 14:25:07 +0000 (0:00:00.138) 0:00:45.320 ******
2026-01-10 14:25:07.911381 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.911389 | orchestrator |
2026-01-10 14:25:07.911397 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-10 14:25:07.911405 | orchestrator | Saturday 10 January 2026 14:25:07 +0000 (0:00:00.113) 0:00:45.433 ******
2026-01-10 14:25:07.911413 | orchestrator | ok: [testbed-node-4] => {
2026-01-10 14:25:07.911421 | orchestrator |     "vgs_report": {
2026-01-10 14:25:07.911429 | orchestrator |         "vg": []
2026-01-10 14:25:07.911437 | orchestrator |     }
2026-01-10 14:25:07.911445 | orchestrator | }
2026-01-10 14:25:07.911453 | orchestrator |
2026-01-10 14:25:07.911461 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-10 14:25:07.911469 | orchestrator | Saturday 10 January 2026 14:25:07 +0000 (0:00:00.152) 0:00:45.585 ******
2026-01-10 14:25:07.911477 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.911484 | orchestrator |
2026-01-10 14:25:07.911492 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-10 14:25:07.911500 | orchestrator | Saturday 10 January 2026 14:25:07 +0000 (0:00:00.140) 0:00:45.726 ******
2026-01-10 14:25:07.911508 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.911516 | orchestrator |
2026-01-10 14:25:07.911524 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-10 14:25:07.911532 | orchestrator | Saturday 10 January 2026 14:25:07 +0000 (0:00:00.150) 0:00:45.877 ******
2026-01-10 14:25:07.911540 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.911547 | orchestrator |
2026-01-10 14:25:07.911555 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-10 14:25:07.911563 | orchestrator | Saturday 10 January 2026 14:25:07 +0000 (0:00:00.139) 0:00:46.016 ******
2026-01-10 14:25:07.911571 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:07.911579 | orchestrator |
2026-01-10 14:25:07.911592 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-10 14:25:12.768187 | orchestrator | Saturday 10 January 2026 14:25:07 +0000 (0:00:00.140) 0:00:46.157 ******
2026-01-10 14:25:12.768322 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.768337 | orchestrator |
2026-01-10 14:25:12.768349 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-10 14:25:12.768359 | orchestrator | Saturday 10 January 2026 14:25:08 +0000 (0:00:00.347) 0:00:46.504 ******
2026-01-10 14:25:12.768369 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.768379 | orchestrator |
2026-01-10 14:25:12.768389 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-10 14:25:12.768399 | orchestrator | Saturday 10 January 2026 14:25:08 +0000 (0:00:00.137) 0:00:46.642 ******
2026-01-10 14:25:12.768408 | orchestrator | skipping: [testbed-node-4]
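The "Gather ... VGs with total and available size in bytes" tasks above most likely shell out to LVM's JSON reporting (something along the lines of `vgs --units b --nosuffix --reportformat json`), and the subsequent "Fail if size ... > available" tasks compare the space needed for new LVs against `vg_free`. A minimal sketch of that comparison, assuming the standard `vgs` JSON shape (the sample data is invented):

```python
# Assumed sketch, not the playbook's actual code: parse vgs-style JSON output
# and check whether the requested LV size still fits into the VG's free space.
import json

# Invented sample in the shape produced by `vgs --reportformat json`.
sample = '''{"report": [{"vg": [
  {"vg_name": "ceph-db-0", "vg_size": "107374182400", "vg_free": "32212254720"}
]}]}'''

def fits(vgs_json, vg_name, needed_bytes):
    """True if the named VG reports at least needed_bytes of free space."""
    for vg in json.loads(vgs_json)["report"][0]["vg"]:
        if vg["vg_name"] == vg_name:
            return int(vg["vg_free"]) >= needed_bytes
    return False  # unknown VG: treat as not fitting

print(fits(sample, "ceph-db-0", 30 * 1024**3))  # True: 30 GiB fits exactly
print(fits(sample, "ceph-db-0", 31 * 1024**3))  # False: exceeds vg_free
```

In this run no DB/WAL VGs exist, so the report is empty and every size check is skipped.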
2026-01-10 14:25:12.768418 | orchestrator |
2026-01-10 14:25:12.768428 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-10 14:25:12.768437 | orchestrator | Saturday 10 January 2026 14:25:08 +0000 (0:00:00.128) 0:00:46.770 ******
2026-01-10 14:25:12.768447 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.768457 | orchestrator |
2026-01-10 14:25:12.768466 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-10 14:25:12.768476 | orchestrator | Saturday 10 January 2026 14:25:08 +0000 (0:00:00.130) 0:00:46.901 ******
2026-01-10 14:25:12.768485 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.768495 | orchestrator |
2026-01-10 14:25:12.768505 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-10 14:25:12.768514 | orchestrator | Saturday 10 January 2026 14:25:08 +0000 (0:00:00.134) 0:00:47.035 ******
2026-01-10 14:25:12.768524 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.768534 | orchestrator |
2026-01-10 14:25:12.768544 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-10 14:25:12.768553 | orchestrator | Saturday 10 January 2026 14:25:08 +0000 (0:00:00.142) 0:00:47.178 ******
2026-01-10 14:25:12.768563 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.768573 | orchestrator |
2026-01-10 14:25:12.768582 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-10 14:25:12.768592 | orchestrator | Saturday 10 January 2026 14:25:09 +0000 (0:00:00.150) 0:00:47.329 ******
2026-01-10 14:25:12.768602 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.768612 | orchestrator |
2026-01-10 14:25:12.768622 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-10 14:25:12.768690 | orchestrator | Saturday 10 January 2026 14:25:09 +0000 (0:00:00.164) 0:00:47.494 ******
2026-01-10 14:25:12.768714 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.768732 | orchestrator |
2026-01-10 14:25:12.768750 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-10 14:25:12.768767 | orchestrator | Saturday 10 January 2026 14:25:09 +0000 (0:00:00.151) 0:00:47.645 ******
2026-01-10 14:25:12.768785 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.768824 | orchestrator |
2026-01-10 14:25:12.768836 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-10 14:25:12.768862 | orchestrator | Saturday 10 January 2026 14:25:09 +0000 (0:00:00.158) 0:00:47.804 ******
2026-01-10 14:25:12.768875 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.768887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:12.768899 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.768911 | orchestrator |
2026-01-10 14:25:12.768924 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-10 14:25:12.768936 | orchestrator | Saturday 10 January 2026 14:25:09 +0000 (0:00:00.169) 0:00:47.974 ******
2026-01-10 14:25:12.768949 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.768971 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:12.768985 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.769003 | orchestrator |
2026-01-10 14:25:12.769021 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-10 14:25:12.769038 | orchestrator | Saturday 10 January 2026 14:25:09 +0000 (0:00:00.158) 0:00:48.132 ******
2026-01-10 14:25:12.769056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.769074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:12.769090 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.769107 | orchestrator |
2026-01-10 14:25:12.769124 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-10 14:25:12.769142 | orchestrator | Saturday 10 January 2026 14:25:10 +0000 (0:00:00.371) 0:00:48.504 ******
2026-01-10 14:25:12.769158 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.769175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:12.769192 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.769207 | orchestrator |
2026-01-10 14:25:12.769247 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-10 14:25:12.769266 | orchestrator | Saturday 10 January 2026 14:25:10 +0000 (0:00:00.162) 0:00:48.666 ******
2026-01-10 14:25:12.769285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.769303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:12.769320 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.769337 | orchestrator |
2026-01-10 14:25:12.769357 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-10 14:25:12.769377 | orchestrator | Saturday 10 January 2026 14:25:10 +0000 (0:00:00.159) 0:00:48.825 ******
2026-01-10 14:25:12.769397 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.769416 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:12.769435 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.769455 | orchestrator |
2026-01-10 14:25:12.769473 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-10 14:25:12.769492 | orchestrator | Saturday 10 January 2026 14:25:10 +0000 (0:00:00.161) 0:00:48.987 ******
2026-01-10 14:25:12.769510 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.769531 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:12.769549 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.769568 | orchestrator |
2026-01-10 14:25:12.769587 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-10 14:25:12.769605 | orchestrator | Saturday 10 January 2026 14:25:10 +0000 (0:00:00.153) 0:00:49.141 ******
2026-01-10 14:25:12.769638 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.769670 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:12.769689 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.769708 | orchestrator |
2026-01-10 14:25:12.769728 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-10 14:25:12.769748 | orchestrator | Saturday 10 January 2026 14:25:11 +0000 (0:00:00.142) 0:00:49.283 ******
2026-01-10 14:25:12.769766 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:12.769784 | orchestrator |
2026-01-10 14:25:12.769823 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-10 14:25:12.769834 | orchestrator | Saturday 10 January 2026 14:25:11 +0000 (0:00:00.509) 0:00:49.792 ******
2026-01-10 14:25:12.769849 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:12.769870 | orchestrator |
2026-01-10 14:25:12.769888 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-10 14:25:12.769907 | orchestrator | Saturday 10 January 2026 14:25:12 +0000 (0:00:00.547) 0:00:50.340 ******
2026-01-10 14:25:12.769927 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:12.769946 | orchestrator |
2026-01-10 14:25:12.769967 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-10 14:25:12.769987 | orchestrator | Saturday 10 January 2026 14:25:12 +0000 (0:00:00.150) 0:00:50.491 ******
2026-01-10 14:25:12.770007 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'vg_name': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.770111 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'vg_name': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:12.770132 | orchestrator |
2026-01-10 14:25:12.770152 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-10 14:25:12.770216 | orchestrator | Saturday 10 January 2026 14:25:12 +0000 (0:00:00.188) 0:00:50.679 ******
2026-01-10 14:25:12.770237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.770255 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:12.770274 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:12.770286 | orchestrator |
2026-01-10 14:25:12.770297 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-10 14:25:12.770308 | orchestrator | Saturday 10 January 2026 14:25:12 +0000 (0:00:00.176) 0:00:50.856 ******
2026-01-10 14:25:12.770324 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:12.770360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:19.146471 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:19.146554 | orchestrator |
2026-01-10 14:25:19.146561 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-10 14:25:19.146568 | orchestrator | Saturday 10 January 2026 14:25:12 +0000 (0:00:00.155) 0:00:51.011 ******
2026-01-10 14:25:19.146573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'})
2026-01-10 14:25:19.146580 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'})
2026-01-10 14:25:19.146585 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:19.146605 | orchestrator |
2026-01-10 14:25:19.146610 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-10 14:25:19.146615 | orchestrator | Saturday 10 January 2026 14:25:12 +0000 (0:00:00.185) 0:00:51.196 ******
2026-01-10 14:25:19.146620 | orchestrator | ok: [testbed-node-4] => {
2026-01-10 14:25:19.146625 | orchestrator |     "lvm_report": {
2026-01-10 14:25:19.146631 | orchestrator |         "lv": [
2026-01-10 14:25:19.146636 | orchestrator |             {
2026-01-10 14:25:19.146641 | orchestrator |                 "lv_name": "osd-block-381f50a6-56c2-5a32-835b-1a08246466ad",
2026-01-10 14:25:19.146647 | orchestrator |                 "vg_name": "ceph-381f50a6-56c2-5a32-835b-1a08246466ad"
2026-01-10 14:25:19.146651 | orchestrator |             },
2026-01-10 14:25:19.146656 | orchestrator |             {
2026-01-10 14:25:19.146661 | orchestrator |                 "lv_name": "osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be",
2026-01-10 14:25:19.146665 | orchestrator |                 "vg_name": "ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be"
2026-01-10 14:25:19.146670 | orchestrator |             }
2026-01-10 14:25:19.146674 | orchestrator |         ],
2026-01-10 14:25:19.146679 | orchestrator |         "pv": [
2026-01-10 14:25:19.146683 | orchestrator |             {
2026-01-10 14:25:19.146688 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-10 14:25:19.146692 | orchestrator |                 "vg_name": "ceph-381f50a6-56c2-5a32-835b-1a08246466ad"
2026-01-10 14:25:19.146697 | orchestrator |             },
2026-01-10 14:25:19.146702 | orchestrator |             {
2026-01-10 14:25:19.146706 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-10 14:25:19.146711 | orchestrator |                 "vg_name": "ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be"
2026-01-10 14:25:19.146715 | orchestrator |             }
2026-01-10 14:25:19.146720 | orchestrator |         ]
2026-01-10 14:25:19.146724 | orchestrator |     }
2026-01-10 14:25:19.146729 | orchestrator | }
2026-01-10 14:25:19.146734 | orchestrator |
2026-01-10 14:25:19.146739 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-10 14:25:19.146743 | orchestrator |
2026-01-10 14:25:19.146748 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 14:25:19.146752 | orchestrator | Saturday 10 January 2026 14:25:13 +0000 (0:00:00.655) 0:00:51.852 ******
2026-01-10 14:25:19.146757 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-10 14:25:19.146762 | orchestrator |
2026-01-10 14:25:19.146767 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-10 14:25:19.146772 | orchestrator | Saturday 10 January 2026 14:25:13 +0000 (0:00:00.248) 0:00:52.101 ******
2026-01-10 14:25:19.146776 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:25:19.146781 | orchestrator |
2026-01-10 14:25:19.146807 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.146812 | orchestrator | Saturday 10 January 2026 14:25:14 +0000 (0:00:00.254) 0:00:52.355 ******
2026-01-10 14:25:19.146817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-10 14:25:19.146822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-10 14:25:19.146826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-10 14:25:19.146831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-10 14:25:19.146836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-10 14:25:19.146840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-10 14:25:19.146844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-10 14:25:19.146849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-10 14:25:19.146853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-10 14:25:19.146862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-10 14:25:19.146866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-10 14:25:19.146871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-10 14:25:19.146875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-10 14:25:19.146880 | orchestrator |
2026-01-10 14:25:19.146887 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.146892 | orchestrator | Saturday 10 January 2026 14:25:14 +0000 (0:00:00.410) 0:00:52.766 ******
2026-01-10 14:25:19.146896 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:19.146901 | orchestrator |
2026-01-10 14:25:19.146905 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.146910 | orchestrator | Saturday 10 January 2026 14:25:14 +0000 (0:00:00.204) 0:00:52.971 ******
2026-01-10 14:25:19.146914 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:19.146919 | orchestrator |
2026-01-10 14:25:19.146924 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.146938 | orchestrator | Saturday 10 January 2026 14:25:14 +0000 (0:00:00.200) 0:00:53.171 ******
2026-01-10 14:25:19.146943 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:19.146947 | orchestrator |
2026-01-10 14:25:19.146952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.146956 | orchestrator | Saturday 10 January 2026 14:25:15 +0000 (0:00:00.205) 0:00:53.377 ******
2026-01-10 14:25:19.146961 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:19.146965 | orchestrator |
2026-01-10 14:25:19.146983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.147022 | orchestrator | Saturday 10 January 2026 14:25:15 +0000 (0:00:00.203) 0:00:53.581 ******
2026-01-10 14:25:19.147027 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:19.147032 | orchestrator |
2026-01-10 14:25:19.147037 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.147042 | orchestrator | Saturday 10 January 2026 14:25:15 +0000 (0:00:00.616) 0:00:54.197 ******
2026-01-10 14:25:19.147048 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:19.147053 | orchestrator |
2026-01-10 14:25:19.147058 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.147063 | orchestrator | Saturday 10 January 2026 14:25:16 +0000 (0:00:00.223) 0:00:54.420 ******
2026-01-10 14:25:19.147068 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:19.147073 | orchestrator |
2026-01-10 14:25:19.147078 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.147083 | orchestrator | Saturday 10 January 2026 14:25:16 +0000 (0:00:00.239) 0:00:54.660 ******
2026-01-10 14:25:19.147088 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:19.147093 | orchestrator |
2026-01-10 14:25:19.147099 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.147104 | orchestrator | Saturday 10 January 2026 14:25:16 +0000 (0:00:00.236) 0:00:54.897 ******
2026-01-10 14:25:19.147110 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1)
2026-01-10 14:25:19.147116 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1)
2026-01-10 14:25:19.147122 | orchestrator |
2026-01-10 14:25:19.147127 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.147131 | orchestrator | Saturday 10 January 2026 14:25:17 +0000 (0:00:00.432) 0:00:55.329 ******
2026-01-10 14:25:19.147136 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6601bfae-4805-46bf-9ab8-35c841e000dc)
2026-01-10 14:25:19.147140 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6601bfae-4805-46bf-9ab8-35c841e000dc)
2026-01-10 14:25:19.147145 | orchestrator |
2026-01-10 14:25:19.147153 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.147161 | orchestrator | Saturday 10 January 2026 14:25:17 +0000 (0:00:00.420) 0:00:55.750 ******
2026-01-10 14:25:19.147165 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80389416-edd4-4aaf-b80d-5b05821e7076)
2026-01-10 14:25:19.147170 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80389416-edd4-4aaf-b80d-5b05821e7076)
2026-01-10 14:25:19.147174 | orchestrator |
2026-01-10 14:25:19.147179 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.147184 | orchestrator | Saturday 10 January 2026 14:25:17 +0000 (0:00:00.442) 0:00:56.193 ******
2026-01-10 14:25:19.147188 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e023e992-ae40-4cae-8e0e-c078bcc164d6)
2026-01-10 14:25:19.147193 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e023e992-ae40-4cae-8e0e-c078bcc164d6)
2026-01-10 14:25:19.147197 | orchestrator |
2026-01-10 14:25:19.147202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:19.147206 | orchestrator | Saturday 10 January 2026 14:25:18 +0000 (0:00:00.429) 0:00:56.622 ******
2026-01-10 14:25:19.147211 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-10 14:25:19.147215 | orchestrator |
2026-01-10 14:25:19.147220 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:19.147225 | orchestrator | Saturday 10 January 2026 14:25:18 +0000 (0:00:00.323) 0:00:56.946 ******
2026-01-10 14:25:19.147229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-10 14:25:19.147234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-10 14:25:19.147238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-10 14:25:19.147242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-10 14:25:19.147247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-10 14:25:19.147251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-10 14:25:19.147256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-10 14:25:19.147261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-10 14:25:19.147265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-10 14:25:19.147270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-10 14:25:19.147274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-10 14:25:19.147282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-10 14:25:28.699291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-10 14:25:28.699399 | orchestrator |
2026-01-10 14:25:28.699421 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.699437 | orchestrator | Saturday 10 January 2026 14:25:19 +0000 (0:00:00.440) 0:00:57.386 ******
2026-01-10 14:25:28.699450 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.699465 | orchestrator |
2026-01-10 14:25:28.699480 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.699493 | orchestrator | Saturday 10 January 2026 14:25:19 +0000 (0:00:00.230) 0:00:57.617 ******
2026-01-10 14:25:28.699508 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.699522 | orchestrator |
2026-01-10 14:25:28.699537 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.699551 | orchestrator | Saturday 10 January 2026 14:25:20 +0000 (0:00:00.765) 0:00:58.383 ******
2026-01-10 14:25:28.699596 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.699611 | orchestrator |
2026-01-10 14:25:28.699625 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.699639 | orchestrator | Saturday 10 January 2026 14:25:20 +0000 (0:00:00.246) 0:00:58.629 ******
2026-01-10 14:25:28.699653 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.699666 | orchestrator |
2026-01-10 14:25:28.699680 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.699694 | orchestrator | Saturday 10 January 2026 14:25:20 +0000 (0:00:00.225) 0:00:58.854 ******
2026-01-10 14:25:28.699708 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.699720 | orchestrator |
2026-01-10 14:25:28.699728 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.699736 | orchestrator | Saturday 10 January 2026 14:25:20 +0000 (0:00:00.199) 0:00:59.054 ******
2026-01-10 14:25:28.699744 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.699752 | orchestrator |
2026-01-10 14:25:28.699760 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.699769 | orchestrator | Saturday 10 January 2026 14:25:21 +0000 (0:00:00.250) 0:00:59.304 ******
2026-01-10 14:25:28.699776 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.699819 | orchestrator |
2026-01-10 14:25:28.699831 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.699843 | orchestrator | Saturday 10 January 2026 14:25:21 +0000 (0:00:00.203) 0:00:59.508 ******
2026-01-10 14:25:28.699861 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.699878 | orchestrator |
2026-01-10 14:25:28.699891 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.699905 | orchestrator | Saturday 10 January 2026 14:25:21 +0000 (0:00:00.201) 0:00:59.710 ******
2026-01-10 14:25:28.699935 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-10 14:25:28.699949 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-10 14:25:28.699963 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-10 14:25:28.699977 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-10 14:25:28.699990 | orchestrator |
2026-01-10 14:25:28.700003 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.700017 | orchestrator | Saturday 10 January 2026 14:25:22 +0000 (0:00:00.648) 0:01:00.358 ******
2026-01-10 14:25:28.700030 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.700043 | orchestrator |
2026-01-10 14:25:28.700057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.700072 | orchestrator | Saturday 10 January 2026 14:25:22 +0000 (0:00:00.228) 0:01:00.587 ******
2026-01-10 14:25:28.700086 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.700101 | orchestrator |
2026-01-10 14:25:28.700115 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.700128 | orchestrator | Saturday 10 January 2026 14:25:22 +0000 (0:00:00.211) 0:01:00.798 ******
2026-01-10 14:25:28.700139 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.700151 | orchestrator |
2026-01-10 14:25:28.700163 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:28.700175 | orchestrator | Saturday 10 January 2026 14:25:22 +0000 (0:00:00.188) 0:01:00.987 ******
2026-01-10 14:25:28.700187 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:28.700199 | orchestrator |
2026-01-10 14:25:28.700211 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-10 14:25:28.700224 | orchestrator | Saturday 10 January 2026 14:25:22 +0000 (0:00:00.206) 0:01:01.193 ******
2026-01-10 14:25:28.700237 | orchestrator | skipping: [testbed-node-5]
2026-01-10
14:25:28.700250 | orchestrator | 2026-01-10 14:25:28.700264 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-10 14:25:28.700277 | orchestrator | Saturday 10 January 2026 14:25:23 +0000 (0:00:00.405) 0:01:01.599 ****** 2026-01-10 14:25:28.700290 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'}}) 2026-01-10 14:25:28.700310 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8e61bc65-6745-5d05-9905-13a4cfa0641e'}}) 2026-01-10 14:25:28.700318 | orchestrator | 2026-01-10 14:25:28.700326 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-10 14:25:28.700334 | orchestrator | Saturday 10 January 2026 14:25:23 +0000 (0:00:00.215) 0:01:01.814 ****** 2026-01-10 14:25:28.700343 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'}) 2026-01-10 14:25:28.700352 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'}) 2026-01-10 14:25:28.700360 | orchestrator | 2026-01-10 14:25:28.700368 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-10 14:25:28.700395 | orchestrator | Saturday 10 January 2026 14:25:25 +0000 (0:00:01.918) 0:01:03.733 ****** 2026-01-10 14:25:28.700404 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:28.700413 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:28.700421 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:25:28.700429 | orchestrator | 2026-01-10 14:25:28.700437 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-10 14:25:28.700445 | orchestrator | Saturday 10 January 2026 14:25:25 +0000 (0:00:00.182) 0:01:03.915 ****** 2026-01-10 14:25:28.700453 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'}) 2026-01-10 14:25:28.700461 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'}) 2026-01-10 14:25:28.700469 | orchestrator | 2026-01-10 14:25:28.700477 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-10 14:25:28.700484 | orchestrator | Saturday 10 January 2026 14:25:27 +0000 (0:00:01.355) 0:01:05.271 ****** 2026-01-10 14:25:28.700492 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:28.700500 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:28.700508 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:28.700516 | orchestrator | 2026-01-10 14:25:28.700523 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-10 14:25:28.700531 | orchestrator | Saturday 10 January 2026 14:25:27 +0000 (0:00:00.174) 0:01:05.445 ****** 2026-01-10 14:25:28.700539 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:28.700547 | orchestrator | 2026-01-10 14:25:28.700555 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-10 14:25:28.700562 | 
orchestrator | Saturday 10 January 2026 14:25:27 +0000 (0:00:00.140) 0:01:05.585 ****** 2026-01-10 14:25:28.700577 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:28.700586 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:28.700593 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:28.700601 | orchestrator | 2026-01-10 14:25:28.700609 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-10 14:25:28.700622 | orchestrator | Saturday 10 January 2026 14:25:27 +0000 (0:00:00.162) 0:01:05.748 ****** 2026-01-10 14:25:28.700630 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:28.700637 | orchestrator | 2026-01-10 14:25:28.700645 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-10 14:25:28.700653 | orchestrator | Saturday 10 January 2026 14:25:27 +0000 (0:00:00.153) 0:01:05.902 ****** 2026-01-10 14:25:28.700661 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:28.700669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:28.700677 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:28.700684 | orchestrator | 2026-01-10 14:25:28.700692 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-10 14:25:28.700700 | orchestrator | Saturday 10 January 2026 14:25:27 +0000 (0:00:00.179) 0:01:06.081 ****** 2026-01-10 14:25:28.700708 | orchestrator | 
skipping: [testbed-node-5] 2026-01-10 14:25:28.700715 | orchestrator | 2026-01-10 14:25:28.700723 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-10 14:25:28.700731 | orchestrator | Saturday 10 January 2026 14:25:27 +0000 (0:00:00.173) 0:01:06.254 ****** 2026-01-10 14:25:28.700739 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:28.700747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:28.700754 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:28.700762 | orchestrator | 2026-01-10 14:25:28.700770 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-10 14:25:28.700803 | orchestrator | Saturday 10 January 2026 14:25:28 +0000 (0:00:00.176) 0:01:06.431 ****** 2026-01-10 14:25:28.700813 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:28.700821 | orchestrator | 2026-01-10 14:25:28.700829 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-10 14:25:28.700837 | orchestrator | Saturday 10 January 2026 14:25:28 +0000 (0:00:00.362) 0:01:06.794 ****** 2026-01-10 14:25:28.700851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:34.860460 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:34.860536 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.860543 | orchestrator | 2026-01-10 14:25:34.860549 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-10 14:25:34.860556 | orchestrator | Saturday 10 January 2026 14:25:28 +0000 (0:00:00.154) 0:01:06.948 ****** 2026-01-10 14:25:34.860560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:34.860565 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:34.860569 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.860573 | orchestrator | 2026-01-10 14:25:34.860578 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-10 14:25:34.860582 | orchestrator | Saturday 10 January 2026 14:25:28 +0000 (0:00:00.165) 0:01:07.114 ****** 2026-01-10 14:25:34.860586 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:34.860590 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:34.860610 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.860614 | orchestrator | 2026-01-10 14:25:34.860619 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-10 14:25:34.860623 | orchestrator | Saturday 10 January 2026 14:25:29 +0000 (0:00:00.180) 0:01:07.294 ****** 2026-01-10 14:25:34.860627 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.860631 | orchestrator | 2026-01-10 14:25:34.860635 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-10 14:25:34.860639 | orchestrator | Saturday 10 January 2026 14:25:29 +0000 
(0:00:00.141) 0:01:07.436 ****** 2026-01-10 14:25:34.860643 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.860647 | orchestrator | 2026-01-10 14:25:34.860651 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-10 14:25:34.860656 | orchestrator | Saturday 10 January 2026 14:25:29 +0000 (0:00:00.134) 0:01:07.571 ****** 2026-01-10 14:25:34.860660 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.860664 | orchestrator | 2026-01-10 14:25:34.860669 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-10 14:25:34.860673 | orchestrator | Saturday 10 January 2026 14:25:29 +0000 (0:00:00.141) 0:01:07.712 ****** 2026-01-10 14:25:34.860677 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:25:34.860682 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-10 14:25:34.860686 | orchestrator | } 2026-01-10 14:25:34.860691 | orchestrator | 2026-01-10 14:25:34.860695 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-10 14:25:34.860699 | orchestrator | Saturday 10 January 2026 14:25:29 +0000 (0:00:00.158) 0:01:07.870 ****** 2026-01-10 14:25:34.860703 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:25:34.860708 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-10 14:25:34.860712 | orchestrator | } 2026-01-10 14:25:34.860716 | orchestrator | 2026-01-10 14:25:34.860720 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-10 14:25:34.860725 | orchestrator | Saturday 10 January 2026 14:25:29 +0000 (0:00:00.141) 0:01:08.012 ****** 2026-01-10 14:25:34.860729 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:25:34.860733 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-10 14:25:34.860737 | orchestrator | } 2026-01-10 14:25:34.860741 | orchestrator | 2026-01-10 14:25:34.860745 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-01-10 14:25:34.860750 | orchestrator | Saturday 10 January 2026 14:25:29 +0000 (0:00:00.142) 0:01:08.155 ****** 2026-01-10 14:25:34.860754 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:34.860758 | orchestrator | 2026-01-10 14:25:34.860762 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-10 14:25:34.860766 | orchestrator | Saturday 10 January 2026 14:25:30 +0000 (0:00:00.501) 0:01:08.657 ****** 2026-01-10 14:25:34.860770 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:34.860817 | orchestrator | 2026-01-10 14:25:34.860823 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-10 14:25:34.860827 | orchestrator | Saturday 10 January 2026 14:25:30 +0000 (0:00:00.527) 0:01:09.185 ****** 2026-01-10 14:25:34.860831 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:34.860835 | orchestrator | 2026-01-10 14:25:34.860839 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-10 14:25:34.860843 | orchestrator | Saturday 10 January 2026 14:25:31 +0000 (0:00:00.707) 0:01:09.892 ****** 2026-01-10 14:25:34.860847 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:34.860851 | orchestrator | 2026-01-10 14:25:34.860856 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-10 14:25:34.860860 | orchestrator | Saturday 10 January 2026 14:25:31 +0000 (0:00:00.152) 0:01:10.044 ****** 2026-01-10 14:25:34.860864 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.860868 | orchestrator | 2026-01-10 14:25:34.860872 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-10 14:25:34.860880 | orchestrator | Saturday 10 January 2026 14:25:31 +0000 (0:00:00.097) 0:01:10.142 ****** 2026-01-10 14:25:34.860884 | orchestrator | 
skipping: [testbed-node-5] 2026-01-10 14:25:34.860888 | orchestrator | 2026-01-10 14:25:34.860893 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-10 14:25:34.860920 | orchestrator | Saturday 10 January 2026 14:25:31 +0000 (0:00:00.112) 0:01:10.255 ****** 2026-01-10 14:25:34.860925 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:25:34.860929 | orchestrator |  "vgs_report": { 2026-01-10 14:25:34.860934 | orchestrator |  "vg": [] 2026-01-10 14:25:34.860950 | orchestrator |  } 2026-01-10 14:25:34.860954 | orchestrator | } 2026-01-10 14:25:34.860959 | orchestrator | 2026-01-10 14:25:34.860963 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-10 14:25:34.860967 | orchestrator | Saturday 10 January 2026 14:25:32 +0000 (0:00:00.151) 0:01:10.406 ****** 2026-01-10 14:25:34.860971 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.860975 | orchestrator | 2026-01-10 14:25:34.860980 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-10 14:25:34.860984 | orchestrator | Saturday 10 January 2026 14:25:32 +0000 (0:00:00.160) 0:01:10.567 ****** 2026-01-10 14:25:34.860988 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.860992 | orchestrator | 2026-01-10 14:25:34.860996 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-10 14:25:34.861000 | orchestrator | Saturday 10 January 2026 14:25:32 +0000 (0:00:00.150) 0:01:10.718 ****** 2026-01-10 14:25:34.861005 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861009 | orchestrator | 2026-01-10 14:25:34.861013 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-10 14:25:34.861017 | orchestrator | Saturday 10 January 2026 14:25:32 +0000 (0:00:00.128) 0:01:10.846 ****** 2026-01-10 14:25:34.861021 | orchestrator | 
skipping: [testbed-node-5] 2026-01-10 14:25:34.861026 | orchestrator | 2026-01-10 14:25:34.861031 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-10 14:25:34.861035 | orchestrator | Saturday 10 January 2026 14:25:32 +0000 (0:00:00.133) 0:01:10.979 ****** 2026-01-10 14:25:34.861040 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861045 | orchestrator | 2026-01-10 14:25:34.861049 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-10 14:25:34.861056 | orchestrator | Saturday 10 January 2026 14:25:32 +0000 (0:00:00.134) 0:01:11.114 ****** 2026-01-10 14:25:34.861064 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861071 | orchestrator | 2026-01-10 14:25:34.861077 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-10 14:25:34.861082 | orchestrator | Saturday 10 January 2026 14:25:33 +0000 (0:00:00.145) 0:01:11.260 ****** 2026-01-10 14:25:34.861087 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861091 | orchestrator | 2026-01-10 14:25:34.861096 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-10 14:25:34.861101 | orchestrator | Saturday 10 January 2026 14:25:33 +0000 (0:00:00.140) 0:01:11.400 ****** 2026-01-10 14:25:34.861105 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861110 | orchestrator | 2026-01-10 14:25:34.861114 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-10 14:25:34.861119 | orchestrator | Saturday 10 January 2026 14:25:33 +0000 (0:00:00.365) 0:01:11.766 ****** 2026-01-10 14:25:34.861123 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861128 | orchestrator | 2026-01-10 14:25:34.861135 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-10 
14:25:34.861140 | orchestrator | Saturday 10 January 2026 14:25:33 +0000 (0:00:00.135) 0:01:11.901 ****** 2026-01-10 14:25:34.861145 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861149 | orchestrator | 2026-01-10 14:25:34.861154 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-10 14:25:34.861162 | orchestrator | Saturday 10 January 2026 14:25:33 +0000 (0:00:00.140) 0:01:12.042 ****** 2026-01-10 14:25:34.861167 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861172 | orchestrator | 2026-01-10 14:25:34.861176 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-10 14:25:34.861181 | orchestrator | Saturday 10 January 2026 14:25:33 +0000 (0:00:00.151) 0:01:12.194 ****** 2026-01-10 14:25:34.861186 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861190 | orchestrator | 2026-01-10 14:25:34.861195 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-10 14:25:34.861200 | orchestrator | Saturday 10 January 2026 14:25:34 +0000 (0:00:00.141) 0:01:12.336 ****** 2026-01-10 14:25:34.861204 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861209 | orchestrator | 2026-01-10 14:25:34.861214 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-10 14:25:34.861218 | orchestrator | Saturday 10 January 2026 14:25:34 +0000 (0:00:00.161) 0:01:12.497 ****** 2026-01-10 14:25:34.861223 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861227 | orchestrator | 2026-01-10 14:25:34.861232 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-10 14:25:34.861237 | orchestrator | Saturday 10 January 2026 14:25:34 +0000 (0:00:00.140) 0:01:12.638 ****** 2026-01-10 14:25:34.861242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:34.861246 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:34.861251 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861255 | orchestrator | 2026-01-10 14:25:34.861260 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-10 14:25:34.861265 | orchestrator | Saturday 10 January 2026 14:25:34 +0000 (0:00:00.155) 0:01:12.793 ****** 2026-01-10 14:25:34.861269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:34.861274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:34.861279 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:34.861284 | orchestrator | 2026-01-10 14:25:34.861288 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-10 14:25:34.861293 | orchestrator | Saturday 10 January 2026 14:25:34 +0000 (0:00:00.154) 0:01:12.947 ****** 2026-01-10 14:25:34.861301 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:37.806604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:37.806689 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:37.806699 | orchestrator | 2026-01-10 14:25:37.806707 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-10 14:25:37.806715 | orchestrator | Saturday 10 January 2026 14:25:34 +0000 (0:00:00.163) 0:01:13.111 ****** 2026-01-10 14:25:37.806722 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:37.806729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:37.806747 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:37.806753 | orchestrator | 2026-01-10 14:25:37.806760 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-10 14:25:37.806827 | orchestrator | Saturday 10 January 2026 14:25:35 +0000 (0:00:00.151) 0:01:13.263 ****** 2026-01-10 14:25:37.806836 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:37.806842 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:37.806849 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:37.806855 | orchestrator | 2026-01-10 14:25:37.806861 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-10 14:25:37.806867 | orchestrator | Saturday 10 January 2026 14:25:35 +0000 (0:00:00.161) 0:01:13.424 ****** 2026-01-10 14:25:37.806873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:37.806890 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:37.806897 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:37.806903 | orchestrator | 2026-01-10 14:25:37.806909 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-10 14:25:37.806915 | orchestrator | Saturday 10 January 2026 14:25:35 +0000 (0:00:00.365) 0:01:13.790 ****** 2026-01-10 14:25:37.806921 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:37.806928 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:37.806934 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:37.806941 | orchestrator | 2026-01-10 14:25:37.806947 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-10 14:25:37.806953 | orchestrator | Saturday 10 January 2026 14:25:35 +0000 (0:00:00.154) 0:01:13.945 ****** 2026-01-10 14:25:37.806959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:37.806966 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:37.806972 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:37.806978 | orchestrator | 2026-01-10 14:25:37.806984 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-10 14:25:37.806990 | orchestrator | Saturday 10 January 2026 14:25:35 +0000 (0:00:00.171) 0:01:14.116 ****** 2026-01-10 14:25:37.806997 | 
orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:37.807004 | orchestrator | 2026-01-10 14:25:37.807010 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-10 14:25:37.807016 | orchestrator | Saturday 10 January 2026 14:25:36 +0000 (0:00:00.508) 0:01:14.624 ****** 2026-01-10 14:25:37.807022 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:37.807028 | orchestrator | 2026-01-10 14:25:37.807034 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-10 14:25:37.807040 | orchestrator | Saturday 10 January 2026 14:25:36 +0000 (0:00:00.497) 0:01:15.122 ****** 2026-01-10 14:25:37.807046 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:37.807053 | orchestrator | 2026-01-10 14:25:37.807059 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-10 14:25:37.807065 | orchestrator | Saturday 10 January 2026 14:25:37 +0000 (0:00:00.142) 0:01:15.264 ****** 2026-01-10 14:25:37.807071 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'vg_name': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'}) 2026-01-10 14:25:37.807078 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'vg_name': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'}) 2026-01-10 14:25:37.807090 | orchestrator | 2026-01-10 14:25:37.807096 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-10 14:25:37.807102 | orchestrator | Saturday 10 January 2026 14:25:37 +0000 (0:00:00.166) 0:01:15.431 ****** 2026-01-10 14:25:37.807122 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:37.807129 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:37.807135 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:37.807141 | orchestrator | 2026-01-10 14:25:37.807147 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-10 14:25:37.807154 | orchestrator | Saturday 10 January 2026 14:25:37 +0000 (0:00:00.146) 0:01:15.578 ****** 2026-01-10 14:25:37.807160 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:37.807167 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:37.807173 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:37.807179 | orchestrator | 2026-01-10 14:25:37.807185 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-10 14:25:37.807192 | orchestrator | Saturday 10 January 2026 14:25:37 +0000 (0:00:00.151) 0:01:15.729 ****** 2026-01-10 14:25:37.807198 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'})  2026-01-10 14:25:37.807204 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'})  2026-01-10 14:25:37.807210 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:37.807216 | orchestrator | 2026-01-10 14:25:37.807225 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-10 14:25:37.807235 | orchestrator | Saturday 10 January 2026 14:25:37 +0000 (0:00:00.147) 0:01:15.877 ****** 2026-01-10 14:25:37.807246 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:25:37.807256 | orchestrator |  "lvm_report": { 2026-01-10 14:25:37.807266 | orchestrator |  "lv": [ 2026-01-10 14:25:37.807277 | orchestrator |  { 2026-01-10 14:25:37.807293 | orchestrator |  "lv_name": "osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e", 2026-01-10 14:25:37.807304 | orchestrator |  "vg_name": "ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e" 2026-01-10 14:25:37.807314 | orchestrator |  }, 2026-01-10 14:25:37.807321 | orchestrator |  { 2026-01-10 14:25:37.807327 | orchestrator |  "lv_name": "osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f", 2026-01-10 14:25:37.807334 | orchestrator |  "vg_name": "ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f" 2026-01-10 14:25:37.807340 | orchestrator |  } 2026-01-10 14:25:37.807346 | orchestrator |  ], 2026-01-10 14:25:37.807352 | orchestrator |  "pv": [ 2026-01-10 14:25:37.807358 | orchestrator |  { 2026-01-10 14:25:37.807364 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-10 14:25:37.807370 | orchestrator |  "vg_name": "ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f" 2026-01-10 14:25:37.807376 | orchestrator |  }, 2026-01-10 14:25:37.807381 | orchestrator |  { 2026-01-10 14:25:37.807387 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-10 14:25:37.807392 | orchestrator |  "vg_name": "ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e" 2026-01-10 14:25:37.807397 | orchestrator |  } 2026-01-10 14:25:37.807403 | orchestrator |  ] 2026-01-10 14:25:37.807413 | orchestrator |  } 2026-01-10 14:25:37.807419 | orchestrator | } 2026-01-10 14:25:37.807424 | orchestrator | 2026-01-10 14:25:37.807430 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:25:37.807435 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-10 14:25:37.807441 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-10 14:25:37.807446 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-10 14:25:37.807451 | orchestrator | 2026-01-10 14:25:37.807457 | orchestrator | 2026-01-10 14:25:37.807462 | orchestrator | 2026-01-10 14:25:37.807467 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:25:37.807473 | orchestrator | Saturday 10 January 2026 14:25:37 +0000 (0:00:00.153) 0:01:16.030 ****** 2026-01-10 14:25:37.807478 | orchestrator | =============================================================================== 2026-01-10 14:25:37.807483 | orchestrator | Create block VGs -------------------------------------------------------- 5.82s 2026-01-10 14:25:37.807489 | orchestrator | Create block LVs -------------------------------------------------------- 4.17s 2026-01-10 14:25:37.807494 | orchestrator | Add known partitions to the list of available block devices ------------- 1.79s 2026-01-10 14:25:37.807499 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s 2026-01-10 14:25:37.807505 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2026-01-10 14:25:37.807510 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.62s 2026-01-10 14:25:37.807515 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.58s 2026-01-10 14:25:37.807521 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s 2026-01-10 14:25:37.807531 | orchestrator | Add known links to the list of available block devices ------------------ 1.30s 2026-01-10 14:25:38.172537 | orchestrator | Print LVM report data --------------------------------------------------- 1.11s 2026-01-10 14:25:38.172629 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s 2026-01-10 14:25:38.172638 | 
orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2026-01-10 14:25:38.172644 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s 2026-01-10 14:25:38.172651 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.84s 2026-01-10 14:25:38.172657 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.77s 2026-01-10 14:25:38.172663 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-01-10 14:25:38.172669 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.74s 2026-01-10 14:25:38.172675 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s 2026-01-10 14:25:38.172682 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-01-10 14:25:38.172689 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.74s 2026-01-10 14:25:50.547719 | orchestrator | 2026-01-10 14:25:50 | INFO  | Task f3a92cdc-1c6c-4b49-8c0f-fb5d6abef871 (facts) was prepared for execution. 2026-01-10 14:25:50.547855 | orchestrator | 2026-01-10 14:25:50 | INFO  | It takes a moment until task f3a92cdc-1c6c-4b49-8c0f-fb5d6abef871 (facts) has been started and output is visible here. 
2026-01-10 14:26:03.345966 | orchestrator | 2026-01-10 14:26:03.346177 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-10 14:26:03.346206 | orchestrator | 2026-01-10 14:26:03.346223 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-10 14:26:03.346241 | orchestrator | Saturday 10 January 2026 14:25:55 +0000 (0:00:00.279) 0:00:00.279 ****** 2026-01-10 14:26:03.346291 | orchestrator | ok: [testbed-manager] 2026-01-10 14:26:03.346312 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:26:03.346329 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:26:03.346346 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:26:03.346363 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:26:03.346379 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:26:03.346395 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:26:03.346408 | orchestrator | 2026-01-10 14:26:03.346423 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-10 14:26:03.346440 | orchestrator | Saturday 10 January 2026 14:25:56 +0000 (0:00:01.224) 0:00:01.503 ****** 2026-01-10 14:26:03.346453 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:26:03.346469 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:26:03.346483 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:26:03.346498 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:26:03.346513 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:03.346527 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:26:03.346539 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:26:03.346552 | orchestrator | 2026-01-10 14:26:03.346565 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-10 14:26:03.346577 | orchestrator | 2026-01-10 14:26:03.346591 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-10 14:26:03.346605 | orchestrator | Saturday 10 January 2026 14:25:57 +0000 (0:00:01.105) 0:00:02.609 ****** 2026-01-10 14:26:03.346618 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:26:03.346631 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:26:03.346645 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:26:03.346659 | orchestrator | ok: [testbed-manager] 2026-01-10 14:26:03.346671 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:26:03.346684 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:26:03.346697 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:26:03.346709 | orchestrator | 2026-01-10 14:26:03.346723 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-10 14:26:03.346735 | orchestrator | 2026-01-10 14:26:03.346748 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-10 14:26:03.346761 | orchestrator | Saturday 10 January 2026 14:26:02 +0000 (0:00:04.993) 0:00:07.602 ****** 2026-01-10 14:26:03.346774 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:26:03.346786 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:26:03.346799 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:26:03.346810 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:26:03.346847 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:26:03.346861 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:26:03.346874 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:26:03.346886 | orchestrator | 2026-01-10 14:26:03.346899 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:26:03.346912 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:03.346927 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-10 14:26:03.346940 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:03.346952 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:03.346964 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:03.346976 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:03.347006 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:03.347019 | orchestrator | 2026-01-10 14:26:03.347031 | orchestrator | 2026-01-10 14:26:03.347044 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:26:03.347057 | orchestrator | Saturday 10 January 2026 14:26:02 +0000 (0:00:00.578) 0:00:08.181 ****** 2026-01-10 14:26:03.347070 | orchestrator | =============================================================================== 2026-01-10 14:26:03.347083 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.99s 2026-01-10 14:26:03.347096 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.22s 2026-01-10 14:26:03.347108 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.11s 2026-01-10 14:26:03.347120 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2026-01-10 14:26:15.820022 | orchestrator | 2026-01-10 14:26:15 | INFO  | Task ef826293-fa1f-4a36-8e8c-bd5f4abdf64d (frr) was prepared for execution. 2026-01-10 14:26:15.820154 | orchestrator | 2026-01-10 14:26:15 | INFO  | It takes a moment until task ef826293-fa1f-4a36-8e8c-bd5f4abdf64d (frr) has been started and output is visible here. 
2026-01-10 14:26:40.813750 | orchestrator | 2026-01-10 14:26:40.813898 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-10 14:26:40.813930 | orchestrator | 2026-01-10 14:26:40.814002 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-10 14:26:40.814118 | orchestrator | Saturday 10 January 2026 14:26:20 +0000 (0:00:00.227) 0:00:00.227 ****** 2026-01-10 14:26:40.814143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 14:26:40.814163 | orchestrator | 2026-01-10 14:26:40.814183 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-10 14:26:40.814201 | orchestrator | Saturday 10 January 2026 14:26:20 +0000 (0:00:00.198) 0:00:00.426 ****** 2026-01-10 14:26:40.814222 | orchestrator | changed: [testbed-manager] 2026-01-10 14:26:40.814242 | orchestrator | 2026-01-10 14:26:40.814263 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-10 14:26:40.814294 | orchestrator | Saturday 10 January 2026 14:26:21 +0000 (0:00:01.041) 0:00:01.467 ****** 2026-01-10 14:26:40.814314 | orchestrator | changed: [testbed-manager] 2026-01-10 14:26:40.814333 | orchestrator | 2026-01-10 14:26:40.814351 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-10 14:26:40.814372 | orchestrator | Saturday 10 January 2026 14:26:30 +0000 (0:00:09.562) 0:00:11.030 ****** 2026-01-10 14:26:40.814392 | orchestrator | ok: [testbed-manager] 2026-01-10 14:26:40.814413 | orchestrator | 2026-01-10 14:26:40.814434 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-10 14:26:40.814454 | orchestrator | Saturday 10 January 2026 14:26:31 +0000 (0:00:01.025) 0:00:12.055 ****** 2026-01-10 
14:26:40.814473 | orchestrator | changed: [testbed-manager] 2026-01-10 14:26:40.814492 | orchestrator | 2026-01-10 14:26:40.814510 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-10 14:26:40.814529 | orchestrator | Saturday 10 January 2026 14:26:32 +0000 (0:00:00.968) 0:00:13.024 ****** 2026-01-10 14:26:40.814548 | orchestrator | ok: [testbed-manager] 2026-01-10 14:26:40.814567 | orchestrator | 2026-01-10 14:26:40.814586 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-10 14:26:40.814606 | orchestrator | Saturday 10 January 2026 14:26:34 +0000 (0:00:01.163) 0:00:14.188 ****** 2026-01-10 14:26:40.814626 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:26:40.814645 | orchestrator | 2026-01-10 14:26:40.814663 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-10 14:26:40.814684 | orchestrator | Saturday 10 January 2026 14:26:34 +0000 (0:00:00.134) 0:00:14.322 ****** 2026-01-10 14:26:40.814733 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:26:40.814755 | orchestrator | 2026-01-10 14:26:40.814773 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-10 14:26:40.814793 | orchestrator | Saturday 10 January 2026 14:26:34 +0000 (0:00:00.146) 0:00:14.468 ****** 2026-01-10 14:26:40.814811 | orchestrator | changed: [testbed-manager] 2026-01-10 14:26:40.814831 | orchestrator | 2026-01-10 14:26:40.814849 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-10 14:26:40.814871 | orchestrator | Saturday 10 January 2026 14:26:35 +0000 (0:00:00.962) 0:00:15.431 ****** 2026-01-10 14:26:40.814918 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-10 14:26:40.814937 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-10 14:26:40.814988 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-10 14:26:40.815008 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-10 14:26:40.815027 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-10 14:26:40.815046 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-10 14:26:40.815063 | orchestrator | 2026-01-10 14:26:40.815074 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-10 14:26:40.815084 | orchestrator | Saturday 10 January 2026 14:26:37 +0000 (0:00:02.231) 0:00:17.662 ****** 2026-01-10 14:26:40.815095 | orchestrator | ok: [testbed-manager] 2026-01-10 14:26:40.815106 | orchestrator | 2026-01-10 14:26:40.815117 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-10 14:26:40.815128 | orchestrator | Saturday 10 January 2026 14:26:39 +0000 (0:00:01.566) 0:00:19.229 ****** 2026-01-10 14:26:40.815139 | orchestrator | changed: [testbed-manager] 2026-01-10 14:26:40.815150 | orchestrator | 2026-01-10 14:26:40.815160 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:26:40.815172 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:26:40.815183 | orchestrator | 2026-01-10 14:26:40.815194 | orchestrator | 2026-01-10 14:26:40.815204 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:26:40.815215 | orchestrator | Saturday 10 January 2026 14:26:40 +0000 (0:00:01.377) 0:00:20.607 ****** 2026-01-10 14:26:40.815226 | 
orchestrator | =============================================================================== 2026-01-10 14:26:40.815236 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.56s 2026-01-10 14:26:40.815247 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.23s 2026-01-10 14:26:40.815257 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.57s 2026-01-10 14:26:40.815268 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.38s 2026-01-10 14:26:40.815279 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.16s 2026-01-10 14:26:40.815311 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.04s 2026-01-10 14:26:40.815323 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.03s 2026-01-10 14:26:40.815333 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.97s 2026-01-10 14:26:40.815344 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.96s 2026-01-10 14:26:40.815354 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-01-10 14:26:40.815365 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-01-10 14:26:40.815375 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-01-10 14:26:41.127683 | orchestrator | 2026-01-10 14:26:41.130432 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jan 10 14:26:41 UTC 2026 2026-01-10 14:26:41.130490 | orchestrator | 2026-01-10 14:26:43.094490 | orchestrator | 2026-01-10 14:26:43 | INFO  | Collection nutshell is prepared for execution 2026-01-10 14:26:43.094635 | orchestrator | 2026-01-10 14:26:43 | INFO  | A [0] - 
dotfiles 2026-01-10 14:26:53.171651 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [0] - homer 2026-01-10 14:26:53.171739 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [0] - netdata 2026-01-10 14:26:53.171748 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [0] - openstackclient 2026-01-10 14:26:53.171756 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [0] - phpmyadmin 2026-01-10 14:26:53.171763 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [0] - common 2026-01-10 14:26:53.172377 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [1] -- loadbalancer 2026-01-10 14:26:53.172575 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [2] --- opensearch 2026-01-10 14:26:53.173053 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [2] --- mariadb-ng 2026-01-10 14:26:53.173225 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [3] ---- horizon 2026-01-10 14:26:53.173471 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [3] ---- keystone 2026-01-10 14:26:53.173624 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [4] ----- neutron 2026-01-10 14:26:53.173942 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [5] ------ wait-for-nova 2026-01-10 14:26:53.174177 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [6] ------- octavia 2026-01-10 14:26:53.175705 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [4] ----- barbican 2026-01-10 14:26:53.175749 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [4] ----- designate 2026-01-10 14:26:53.175767 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [4] ----- ironic 2026-01-10 14:26:53.175779 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [4] ----- placement 2026-01-10 14:26:53.176118 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [4] ----- magnum 2026-01-10 14:26:53.176777 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [1] -- openvswitch 2026-01-10 14:26:53.176805 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [2] --- ovn 2026-01-10 14:26:53.177291 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [1] -- memcached 2026-01-10 
14:26:53.177581 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [1] -- redis 2026-01-10 14:26:53.177603 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [1] -- rabbitmq-ng 2026-01-10 14:26:53.178103 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [0] - kubernetes 2026-01-10 14:26:53.180426 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [1] -- kubeconfig 2026-01-10 14:26:53.180496 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [1] -- copy-kubeconfig 2026-01-10 14:26:53.180808 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [0] - ceph 2026-01-10 14:26:53.183132 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [1] -- ceph-pools 2026-01-10 14:26:53.183163 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [2] --- copy-ceph-keys 2026-01-10 14:26:53.183175 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [3] ---- cephclient 2026-01-10 14:26:53.183187 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-10 14:26:53.183198 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [4] ----- wait-for-keystone 2026-01-10 14:26:53.183667 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-10 14:26:53.183690 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [5] ------ glance 2026-01-10 14:26:53.183731 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [5] ------ cinder 2026-01-10 14:26:53.183743 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [5] ------ nova 2026-01-10 14:26:53.184383 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [4] ----- prometheus 2026-01-10 14:26:53.184411 | orchestrator | 2026-01-10 14:26:53 | INFO  | A [5] ------ grafana 2026-01-10 14:26:53.369786 | orchestrator | 2026-01-10 14:26:53 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-10 14:26:53.369895 | orchestrator | 2026-01-10 14:26:53 | INFO  | Tasks are running in the background 2026-01-10 14:26:56.439837 | orchestrator | 2026-01-10 14:26:56 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-10 14:26:58.544062 | orchestrator | 2026-01-10 14:26:58 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:26:58.544176 | orchestrator | 2026-01-10 14:26:58 | INFO  | Task e319a501-316f-464b-ab7e-8ca509ca97a2 is in state STARTED 2026-01-10 14:26:58.544526 | orchestrator | 2026-01-10 14:26:58 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:26:58.545141 | orchestrator | 2026-01-10 14:26:58 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:26:58.546464 | orchestrator | 2026-01-10 14:26:58 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:26:58.546934 | orchestrator | 2026-01-10 14:26:58 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:26:58.547452 | orchestrator | 2026-01-10 14:26:58 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:26:58.547654 | orchestrator | 2026-01-10 14:26:58 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:01.604448 | orchestrator | 2026-01-10 14:27:01 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:01.607615 | orchestrator | 2026-01-10 14:27:01 | INFO  | Task e319a501-316f-464b-ab7e-8ca509ca97a2 is in state STARTED 2026-01-10 14:27:01.609767 | orchestrator | 2026-01-10 14:27:01 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:01.610549 | orchestrator | 2026-01-10 14:27:01 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:01.610929 | orchestrator | 2026-01-10 14:27:01 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:01.611446 | orchestrator | 2026-01-10 14:27:01 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:01.612011 | orchestrator | 2026-01-10 14:27:01 | INFO  | Task 
08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:01.612101 | orchestrator | 2026-01-10 14:27:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:04.645654 | orchestrator | 2026-01-10 14:27:04 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:04.646867 | orchestrator | 2026-01-10 14:27:04 | INFO  | Task e319a501-316f-464b-ab7e-8ca509ca97a2 is in state STARTED 2026-01-10 14:27:04.649641 | orchestrator | 2026-01-10 14:27:04 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:04.650160 | orchestrator | 2026-01-10 14:27:04 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:04.652628 | orchestrator | 2026-01-10 14:27:04 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:04.653014 | orchestrator | 2026-01-10 14:27:04 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:04.660362 | orchestrator | 2026-01-10 14:27:04 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:04.660425 | orchestrator | 2026-01-10 14:27:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:07.690706 | orchestrator | 2026-01-10 14:27:07 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:07.690803 | orchestrator | 2026-01-10 14:27:07 | INFO  | Task e319a501-316f-464b-ab7e-8ca509ca97a2 is in state STARTED 2026-01-10 14:27:07.690817 | orchestrator | 2026-01-10 14:27:07 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:07.690836 | orchestrator | 2026-01-10 14:27:07 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:07.691679 | orchestrator | 2026-01-10 14:27:07 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:07.692232 | orchestrator | 2026-01-10 14:27:07 | INFO  | Task 
87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:07.694649 | orchestrator | 2026-01-10 14:27:07 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:07.694685 | orchestrator | 2026-01-10 14:27:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:10.793736 | orchestrator | 2026-01-10 14:27:10 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:10.793831 | orchestrator | 2026-01-10 14:27:10 | INFO  | Task e319a501-316f-464b-ab7e-8ca509ca97a2 is in state STARTED 2026-01-10 14:27:10.793844 | orchestrator | 2026-01-10 14:27:10 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:10.793852 | orchestrator | 2026-01-10 14:27:10 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:10.793859 | orchestrator | 2026-01-10 14:27:10 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:10.793867 | orchestrator | 2026-01-10 14:27:10 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:10.793874 | orchestrator | 2026-01-10 14:27:10 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:10.793882 | orchestrator | 2026-01-10 14:27:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:13.781529 | orchestrator | 2026-01-10 14:27:13 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:13.781615 | orchestrator | 2026-01-10 14:27:13 | INFO  | Task e319a501-316f-464b-ab7e-8ca509ca97a2 is in state STARTED 2026-01-10 14:27:13.781625 | orchestrator | 2026-01-10 14:27:13 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:13.781632 | orchestrator | 2026-01-10 14:27:13 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:13.782008 | orchestrator | 2026-01-10 14:27:13 | INFO  | Task 
916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:13.782775 | orchestrator | 2026-01-10 14:27:13 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:13.783074 | orchestrator | 2026-01-10 14:27:13 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:13.783090 | orchestrator | 2026-01-10 14:27:13 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:16.973421 | orchestrator | 2026-01-10 14:27:16 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:16.973736 | orchestrator | 2026-01-10 14:27:16 | INFO  | Task e319a501-316f-464b-ab7e-8ca509ca97a2 is in state STARTED 2026-01-10 14:27:16.973766 | orchestrator | 2026-01-10 14:27:16 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:16.973807 | orchestrator | 2026-01-10 14:27:16 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:16.973819 | orchestrator | 2026-01-10 14:27:16 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:16.973830 | orchestrator | 2026-01-10 14:27:16 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:16.973855 | orchestrator | 2026-01-10 14:27:16 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:16.973870 | orchestrator | 2026-01-10 14:27:16 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:20.064501 | orchestrator | 2026-01-10 14:27:20 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:20.065740 | orchestrator | 2026-01-10 14:27:20.065835 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-10 14:27:20.065849 | orchestrator | 2026-01-10 14:27:20.065860 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-01-10 14:27:20.065870 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.447) 0:00:00.447 ****** 2026-01-10 14:27:20.065918 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:27:20.065938 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:27:20.065954 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:27:20.065973 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:27:20.065990 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:27:20.066117 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:27:20.066139 | orchestrator | changed: [testbed-manager] 2026-01-10 14:27:20.066157 | orchestrator | 2026-01-10 14:27:20.066174 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-01-10 14:27:20.066193 | orchestrator | Saturday 10 January 2026 14:27:09 +0000 (0:00:03.525) 0:00:03.972 ****** 2026-01-10 14:27:20.066204 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-10 14:27:20.066215 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-10 14:27:20.066251 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-10 14:27:20.066262 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-10 14:27:20.066293 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-10 14:27:20.066306 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-10 14:27:20.066317 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-10 14:27:20.066329 | orchestrator | 2026-01-10 14:27:20.066340 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-01-10 14:27:20.066352 | orchestrator | Saturday 10 January 2026 14:27:11 +0000 (0:00:01.889) 0:00:05.862 ****** 2026-01-10 14:27:20.066368 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:27:10.174164', 'end': '2026-01-10 14:27:10.177774', 'delta': '0:00:00.003610', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:27:20.066436 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:27:10.193048', 'end': '2026-01-10 14:27:10.202852', 'delta': '0:00:00.009804', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:27:20.066484 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:27:10.225942', 'end': '2026-01-10 14:27:10.235095', 'delta': '0:00:00.009153', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:27:20.066559 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:27:10.271508', 'end': '2026-01-10 14:27:10.279997', 'delta': '0:00:00.008489', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:27:20.066572 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:27:10.395101', 'end': '2026-01-10 14:27:10.403988', 'delta': '0:00:00.008887', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:27:20.066963 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:27:10.697925', 'end': '2026-01-10 14:27:10.705212', 'delta': '0:00:00.007287', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:27:20.066978 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:27:10.886969', 'end': '2026-01-10 14:27:10.892248', 'delta': '0:00:00.005279', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:27:20.067003 | orchestrator | 2026-01-10 14:27:20.067014 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-01-10 14:27:20.067024 | orchestrator | Saturday 10 January 2026 14:27:13 +0000 (0:00:02.270) 0:00:08.133 ****** 2026-01-10 14:27:20.067058 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-10 14:27:20.067068 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-10 14:27:20.067096 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-10 14:27:20.067106 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-10 14:27:20.067116 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-10 14:27:20.067146 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-10 14:27:20.067156 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-10 14:27:20.067165 | orchestrator | 2026-01-10 14:27:20.067180 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-01-10 14:27:20.067217 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:02.539) 0:00:10.672 ****** 2026-01-10 14:27:20.067227 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-01-10 14:27:20.067237 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-01-10 14:27:20.067247 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-01-10 14:27:20.067257 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-01-10 14:27:20.067266 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-01-10 14:27:20.067276 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-01-10 14:27:20.067286 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-01-10 14:27:20.067295 | orchestrator | 2026-01-10 14:27:20.067327 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:27:20.067360 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:27:20.067413 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:27:20.067433 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:27:20.067452 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:27:20.067470 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:27:20.067487 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:27:20.067505 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:27:20.067555 | orchestrator | 2026-01-10 14:27:20.067574 | orchestrator | 2026-01-10 14:27:20.067591 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-01-10 14:27:20.067608 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:03.135) 0:00:13.807 ****** 2026-01-10 14:27:20.067653 | orchestrator | =============================================================================== 2026-01-10 14:27:20.067673 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.53s 2026-01-10 14:27:20.067683 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.14s 2026-01-10 14:27:20.067714 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.54s 2026-01-10 14:27:20.067744 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.27s 2026-01-10 14:27:20.067755 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.89s 2026-01-10 14:27:20.067765 | orchestrator | 2026-01-10 14:27:20 | INFO  | Task e319a501-316f-464b-ab7e-8ca509ca97a2 is in state SUCCESS 2026-01-10 14:27:20.072581 | orchestrator | 2026-01-10 14:27:20 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:20.073546 | orchestrator | 2026-01-10 14:27:20 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:20.073884 | orchestrator | 2026-01-10 14:27:20 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:20.078718 | orchestrator | 2026-01-10 14:27:20 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:20.080292 | orchestrator | 2026-01-10 14:27:20 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:20.080350 | orchestrator | 2026-01-10 14:27:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:23.194479 | orchestrator | 2026-01-10 14:27:23 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is 
in state STARTED 2026-01-10 14:27:23.194616 | orchestrator | 2026-01-10 14:27:23 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:23.194636 | orchestrator | 2026-01-10 14:27:23 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:23.194648 | orchestrator | 2026-01-10 14:27:23 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:23.194659 | orchestrator | 2026-01-10 14:27:23 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:23.194671 | orchestrator | 2026-01-10 14:27:23 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:23.194682 | orchestrator | 2026-01-10 14:27:23 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:23.194710 | orchestrator | 2026-01-10 14:27:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:26.222835 | orchestrator | 2026-01-10 14:27:26 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:26.223466 | orchestrator | 2026-01-10 14:27:26 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:26.225673 | orchestrator | 2026-01-10 14:27:26 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:26.228582 | orchestrator | 2026-01-10 14:27:26 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:26.229305 | orchestrator | 2026-01-10 14:27:26 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:26.231413 | orchestrator | 2026-01-10 14:27:26 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:26.232077 | orchestrator | 2026-01-10 14:27:26 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:26.233402 | orchestrator | 2026-01-10 14:27:26 | INFO  | Wait 1 second(s) until the 
next check 2026-01-10 14:27:29.336683 | orchestrator | 2026-01-10 14:27:29 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:29.336812 | orchestrator | 2026-01-10 14:27:29 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:29.336826 | orchestrator | 2026-01-10 14:27:29 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:29.336835 | orchestrator | 2026-01-10 14:27:29 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:29.336844 | orchestrator | 2026-01-10 14:27:29 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:29.336853 | orchestrator | 2026-01-10 14:27:29 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:29.336861 | orchestrator | 2026-01-10 14:27:29 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:29.336870 | orchestrator | 2026-01-10 14:27:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:32.325464 | orchestrator | 2026-01-10 14:27:32 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:32.328543 | orchestrator | 2026-01-10 14:27:32 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:32.329903 | orchestrator | 2026-01-10 14:27:32 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:32.331037 | orchestrator | 2026-01-10 14:27:32 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:32.336610 | orchestrator | 2026-01-10 14:27:32 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:32.336664 | orchestrator | 2026-01-10 14:27:32 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:32.336888 | orchestrator | 2026-01-10 14:27:32 | INFO  | Task 
03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:32.336898 | orchestrator | 2026-01-10 14:27:32 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:35.439812 | orchestrator | 2026-01-10 14:27:35 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:35.439884 | orchestrator | 2026-01-10 14:27:35 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:35.439890 | orchestrator | 2026-01-10 14:27:35 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:35.439895 | orchestrator | 2026-01-10 14:27:35 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:35.439899 | orchestrator | 2026-01-10 14:27:35 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:35.439903 | orchestrator | 2026-01-10 14:27:35 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:35.439907 | orchestrator | 2026-01-10 14:27:35 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:35.439911 | orchestrator | 2026-01-10 14:27:35 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:38.509450 | orchestrator | 2026-01-10 14:27:38 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:38.511676 | orchestrator | 2026-01-10 14:27:38 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:38.519162 | orchestrator | 2026-01-10 14:27:38 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:38.521909 | orchestrator | 2026-01-10 14:27:38 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:38.522800 | orchestrator | 2026-01-10 14:27:38 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:38.523476 | orchestrator | 2026-01-10 14:27:38 | INFO  | Task 
08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:38.524392 | orchestrator | 2026-01-10 14:27:38 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:38.524422 | orchestrator | 2026-01-10 14:27:38 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:41.661985 | orchestrator | 2026-01-10 14:27:41 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state STARTED 2026-01-10 14:27:41.662243 | orchestrator | 2026-01-10 14:27:41 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:41.662263 | orchestrator | 2026-01-10 14:27:41 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:41.662274 | orchestrator | 2026-01-10 14:27:41 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:41.662284 | orchestrator | 2026-01-10 14:27:41 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:41.662293 | orchestrator | 2026-01-10 14:27:41 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:41.662302 | orchestrator | 2026-01-10 14:27:41 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:41.662311 | orchestrator | 2026-01-10 14:27:41 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:44.632041 | orchestrator | 2026-01-10 14:27:44 | INFO  | Task f25fbed3-0438-4627-8f81-fdb8059e70a8 is in state SUCCESS 2026-01-10 14:27:44.632138 | orchestrator | 2026-01-10 14:27:44 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:44.632187 | orchestrator | 2026-01-10 14:27:44 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:44.632196 | orchestrator | 2026-01-10 14:27:44 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:44.632204 | orchestrator | 2026-01-10 14:27:44 | INFO  | Task 
87b95b60-2e4a-4328-80a0-43750264fbe2 is in state STARTED 2026-01-10 14:27:44.632213 | orchestrator | 2026-01-10 14:27:44 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:44.632222 | orchestrator | 2026-01-10 14:27:44 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:44.632331 | orchestrator | 2026-01-10 14:27:44 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:47.686972 | orchestrator | 2026-01-10 14:27:47 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:47.687052 | orchestrator | 2026-01-10 14:27:47 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:47.687704 | orchestrator | 2026-01-10 14:27:47 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:47.688285 | orchestrator | 2026-01-10 14:27:47 | INFO  | Task 87b95b60-2e4a-4328-80a0-43750264fbe2 is in state SUCCESS 2026-01-10 14:27:47.690771 | orchestrator | 2026-01-10 14:27:47 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:47.691429 | orchestrator | 2026-01-10 14:27:47 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:47.691595 | orchestrator | 2026-01-10 14:27:47 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:50.774317 | orchestrator | 2026-01-10 14:27:50 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:50.774453 | orchestrator | 2026-01-10 14:27:50 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:50.775347 | orchestrator | 2026-01-10 14:27:50 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:50.777565 | orchestrator | 2026-01-10 14:27:50 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:50.779238 | orchestrator | 2026-01-10 14:27:50 | INFO  | Task 
03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:50.779385 | orchestrator | 2026-01-10 14:27:50 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:53.854130 | orchestrator | 2026-01-10 14:27:53 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:53.858142 | orchestrator | 2026-01-10 14:27:53 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:53.858254 | orchestrator | 2026-01-10 14:27:53 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:53.858551 | orchestrator | 2026-01-10 14:27:53 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:53.885963 | orchestrator | 2026-01-10 14:27:53 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:53.886134 | orchestrator | 2026-01-10 14:27:53 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:27:56.934484 | orchestrator | 2026-01-10 14:27:56 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:27:56.934575 | orchestrator | 2026-01-10 14:27:56 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:27:56.936650 | orchestrator | 2026-01-10 14:27:56 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:27:56.939043 | orchestrator | 2026-01-10 14:27:56 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:27:56.942847 | orchestrator | 2026-01-10 14:27:56 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:27:56.942914 | orchestrator | 2026-01-10 14:27:56 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:28:00.104924 | orchestrator | 2026-01-10 14:28:00 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:28:00.104999 | orchestrator | 2026-01-10 14:28:00 | INFO  | Task 
96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:28:00.106865 | orchestrator | 2026-01-10 14:28:00 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:28:00.110616 | orchestrator | 2026-01-10 14:28:00 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:28:00.110672 | orchestrator | 2026-01-10 14:28:00 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:28:00.110679 | orchestrator | 2026-01-10 14:28:00 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:28:03.203695 | orchestrator | 2026-01-10 14:28:03 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:28:03.204153 | orchestrator | 2026-01-10 14:28:03 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:28:03.204864 | orchestrator | 2026-01-10 14:28:03 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:28:03.206052 | orchestrator | 2026-01-10 14:28:03 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:28:03.208622 | orchestrator | 2026-01-10 14:28:03 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:28:03.208668 | orchestrator | 2026-01-10 14:28:03 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:28:06.268910 | orchestrator | 2026-01-10 14:28:06 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:28:06.277536 | orchestrator | 2026-01-10 14:28:06 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:28:06.279928 | orchestrator | 2026-01-10 14:28:06 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:28:06.284716 | orchestrator | 2026-01-10 14:28:06 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:28:06.284800 | orchestrator | 2026-01-10 14:28:06 | INFO  | Task 
03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:28:06.284815 | orchestrator | 2026-01-10 14:28:06 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:28:09.486538 | orchestrator | 2026-01-10 14:28:09 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:28:09.490478 | orchestrator | 2026-01-10 14:28:09 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:28:09.491520 | orchestrator | 2026-01-10 14:28:09 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:28:09.492445 | orchestrator | 2026-01-10 14:28:09 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:28:09.494094 | orchestrator | 2026-01-10 14:28:09 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:28:09.494140 | orchestrator | 2026-01-10 14:28:09 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:28:12.546614 | orchestrator | 2026-01-10 14:28:12 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:28:12.547743 | orchestrator | 2026-01-10 14:28:12 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:28:12.550181 | orchestrator | 2026-01-10 14:28:12 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:28:12.551405 | orchestrator | 2026-01-10 14:28:12 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:28:12.553413 | orchestrator | 2026-01-10 14:28:12 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:28:12.554156 | orchestrator | 2026-01-10 14:28:12 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:28:15.634367 | orchestrator | 2026-01-10 14:28:15 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:28:15.637903 | orchestrator | 2026-01-10 14:28:15 | INFO  | Task 
96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:28:15.638977 | orchestrator | 2026-01-10 14:28:15 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:28:15.641059 | orchestrator | 2026-01-10 14:28:15 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:28:15.642147 | orchestrator | 2026-01-10 14:28:15 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:28:15.642218 | orchestrator | 2026-01-10 14:28:15 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:28:18.688781 | orchestrator | 2026-01-10 14:28:18 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:28:18.688965 | orchestrator | 2026-01-10 14:28:18 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:28:18.691763 | orchestrator | 2026-01-10 14:28:18 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:28:18.692825 | orchestrator | 2026-01-10 14:28:18 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:28:18.697767 | orchestrator | 2026-01-10 14:28:18 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:28:18.697860 | orchestrator | 2026-01-10 14:28:18 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:28:21.751074 | orchestrator | 2026-01-10 14:28:21 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:28:21.751216 | orchestrator | 2026-01-10 14:28:21 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:28:21.751318 | orchestrator | 2026-01-10 14:28:21 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:28:21.751333 | orchestrator | 2026-01-10 14:28:21 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:28:21.751344 | orchestrator | 2026-01-10 14:28:21 | INFO  | Task 
03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:28:21.751356 | orchestrator | 2026-01-10 14:28:21 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:28:24.798406 | orchestrator | 2026-01-10 14:28:24 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state STARTED 2026-01-10 14:28:24.798504 | orchestrator | 2026-01-10 14:28:24 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:28:24.799109 | orchestrator | 2026-01-10 14:28:24 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED 2026-01-10 14:28:24.800051 | orchestrator | 2026-01-10 14:28:24 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:28:24.802118 | orchestrator | 2026-01-10 14:28:24 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED 2026-01-10 14:28:24.802147 | orchestrator | 2026-01-10 14:28:24 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:28:27.851588 | orchestrator | 2026-01-10 14:28:27 | INFO  | Task da23e011-f8f0-495b-be1e-c746b32d6fd5 is in state SUCCESS 2026-01-10 14:28:27.852921 | orchestrator | 2026-01-10 14:28:27.852974 | orchestrator | 2026-01-10 14:28:27.852987 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-01-10 14:28:27.853000 | orchestrator | 2026-01-10 14:28:27.853011 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-01-10 14:28:27.853029 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.634) 0:00:00.634 ****** 2026-01-10 14:28:27.853056 | orchestrator | ok: [testbed-manager] => { 2026-01-10 14:28:27.853079 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-01-10 14:28:27.853100 | orchestrator | } 2026-01-10 14:28:27.853118 | orchestrator | 2026-01-10 14:28:27.853136 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-01-10 14:28:27.853154 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.344) 0:00:00.978 ****** 2026-01-10 14:28:27.853173 | orchestrator | ok: [testbed-manager] 2026-01-10 14:28:27.853192 | orchestrator | 2026-01-10 14:28:27.853210 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-01-10 14:28:27.853227 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:01.781) 0:00:02.759 ****** 2026-01-10 14:28:27.853244 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-01-10 14:28:27.853320 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-01-10 14:28:27.853379 | orchestrator | 2026-01-10 14:28:27.853423 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-01-10 14:28:27.853445 | orchestrator | Saturday 10 January 2026 14:27:08 +0000 (0:00:01.566) 0:00:04.326 ****** 2026-01-10 14:28:27.853458 | orchestrator | changed: [testbed-manager] 2026-01-10 14:28:27.853469 | orchestrator | 2026-01-10 14:28:27.853480 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-01-10 14:28:27.853492 | orchestrator | Saturday 10 January 2026 14:27:12 +0000 (0:00:03.291) 0:00:07.617 ****** 2026-01-10 14:28:27.853564 | orchestrator | changed: [testbed-manager] 2026-01-10 14:28:27.853577 | orchestrator | 2026-01-10 14:28:27.853593 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-01-10 14:28:27.853611 | orchestrator | Saturday 10 January 2026 14:27:13 +0000 (0:00:00.962) 0:00:08.580 ****** 2026-01-10 14:28:27.853630 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2026-01-10 14:28:27.853648 | orchestrator | ok: [testbed-manager]
2026-01-10 14:28:27.853666 | orchestrator |
2026-01-10 14:28:27.853685 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-10 14:28:27.853703 | orchestrator | Saturday 10 January 2026 14:27:38 +0000 (0:00:25.381) 0:00:33.962 ******
2026-01-10 14:28:27.853722 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.853740 | orchestrator |
2026-01-10 14:28:27.853756 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:28:27.853768 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:28:27.853781 | orchestrator |
2026-01-10 14:28:27.853791 | orchestrator |
2026-01-10 14:28:27.853802 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:28:27.853833 | orchestrator | Saturday 10 January 2026 14:27:42 +0000 (0:00:03.757) 0:00:37.719 ******
2026-01-10 14:28:27.853844 | orchestrator | ===============================================================================
2026-01-10 14:28:27.853855 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.38s
2026-01-10 14:28:27.853866 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.76s
2026-01-10 14:28:27.853876 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.29s
2026-01-10 14:28:27.853887 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.78s
2026-01-10 14:28:27.853897 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.57s
2026-01-10 14:28:27.853908 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 0.96s
2026-01-10 14:28:27.853919 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.34s
2026-01-10 14:28:27.853929 | orchestrator |
2026-01-10 14:28:27.853940 | orchestrator |
2026-01-10 14:28:27.853952 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-10 14:28:27.853962 | orchestrator |
2026-01-10 14:28:27.853973 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-10 14:28:27.853984 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.336) 0:00:00.336 ******
2026-01-10 14:28:27.853995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-10 14:28:27.854007 | orchestrator |
2026-01-10 14:28:27.854089 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-10 14:28:27.854104 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.275) 0:00:00.611 ******
2026-01-10 14:28:27.854114 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-10 14:28:27.854125 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-10 14:28:27.854136 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-10 14:28:27.854147 | orchestrator |
2026-01-10 14:28:27.854158 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-10 14:28:27.854169 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:02.139) 0:00:02.750 ******
2026-01-10 14:28:27.854179 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.854190 | orchestrator |
2026-01-10 14:28:27.854201 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-10 14:28:27.854211 | orchestrator | Saturday 10 January 2026 14:27:10 +0000 (0:00:02.573) 0:00:05.324 ******
2026-01-10 14:28:27.854253 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-10 14:28:27.854294 | orchestrator | ok: [testbed-manager]
2026-01-10 14:28:27.854305 | orchestrator |
2026-01-10 14:28:27.854316 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-10 14:28:27.854327 | orchestrator | Saturday 10 January 2026 14:27:42 +0000 (0:00:32.122) 0:00:37.446 ******
2026-01-10 14:28:27.854338 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.854349 | orchestrator |
2026-01-10 14:28:27.854360 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-10 14:28:27.854370 | orchestrator | Saturday 10 January 2026 14:27:43 +0000 (0:00:00.828) 0:00:38.274 ******
2026-01-10 14:28:27.854381 | orchestrator | ok: [testbed-manager]
2026-01-10 14:28:27.854392 | orchestrator |
2026-01-10 14:28:27.854410 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-10 14:28:27.854421 | orchestrator | Saturday 10 January 2026 14:27:43 +0000 (0:00:00.490) 0:00:38.765 ******
2026-01-10 14:28:27.854432 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.854442 | orchestrator |
2026-01-10 14:28:27.854453 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-10 14:28:27.854464 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:01.297) 0:00:40.062 ******
2026-01-10 14:28:27.854474 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.854485 | orchestrator |
2026-01-10 14:28:27.854496 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-10 14:28:27.854506 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.676) 0:00:40.739 ******
2026-01-10 14:28:27.854517 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.854527 | orchestrator |
2026-01-10 14:28:27.854538 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-10 14:28:27.854549 | orchestrator | Saturday 10 January 2026 14:27:46 +0000 (0:00:00.870) 0:00:41.609 ******
2026-01-10 14:28:27.854560 | orchestrator | ok: [testbed-manager]
2026-01-10 14:28:27.854570 | orchestrator |
2026-01-10 14:28:27.854583 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:28:27.854602 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:28:27.854620 | orchestrator |
2026-01-10 14:28:27.854637 | orchestrator |
2026-01-10 14:28:27.854654 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:28:27.854674 | orchestrator | Saturday 10 January 2026 14:27:47 +0000 (0:00:00.326) 0:00:41.936 ******
2026-01-10 14:28:27.854693 | orchestrator | ===============================================================================
2026-01-10 14:28:27.854710 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.12s
2026-01-10 14:28:27.854728 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.57s
2026-01-10 14:28:27.854739 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.14s
2026-01-10 14:28:27.854750 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.30s
2026-01-10 14:28:27.854761 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.87s
2026-01-10 14:28:27.854772 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.83s
2026-01-10 14:28:27.854782 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.68s
2026-01-10 14:28:27.854793 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.49s
2026-01-10 14:28:27.854803 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.33s
2026-01-10 14:28:27.854814 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.28s
2026-01-10 14:28:27.854825 | orchestrator |
2026-01-10 14:28:27.854836 | orchestrator |
2026-01-10 14:28:27.854846 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:28:27.854866 | orchestrator |
2026-01-10 14:28:27.854877 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:28:27.854888 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.592) 0:00:00.592 ******
2026-01-10 14:28:27.854898 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-10 14:28:27.854909 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-10 14:28:27.854920 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-10 14:28:27.854930 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-10 14:28:27.854941 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-10 14:28:27.854952 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-10 14:28:27.854962 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-10 14:28:27.854973 | orchestrator |
2026-01-10 14:28:27.854983 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-10 14:28:27.854994 | orchestrator |
2026-01-10 14:28:27.855005 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-10 14:28:27.855016 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:01.276) 0:00:01.868 ******
2026-01-10 14:28:27.855040 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:28:27.855060 | orchestrator |
2026-01-10 14:28:27.855071 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-10 14:28:27.855081 | orchestrator | Saturday 10 January 2026 14:27:08 +0000 (0:00:01.462) 0:00:03.331 ******
2026-01-10 14:28:27.855092 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:28:27.855103 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:28:27.855114 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:28:27.855125 | orchestrator | ok: [testbed-manager]
2026-01-10 14:28:27.855136 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:27.855155 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:28:27.855167 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:28:27.855177 | orchestrator |
2026-01-10 14:28:27.855188 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-10 14:28:27.855199 | orchestrator | Saturday 10 January 2026 14:27:10 +0000 (0:00:02.301) 0:00:05.633 ******
2026-01-10 14:28:27.855210 | orchestrator | ok: [testbed-manager]
2026-01-10 14:28:27.855221 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:28:27.855232 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:28:27.855242 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:28:27.855253 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:27.855329 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:28:27.855342 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:28:27.855353 | orchestrator |
2026-01-10 14:28:27.855363 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-10 14:28:27.855380 | orchestrator | Saturday 10 January 2026 14:27:14 +0000 (0:00:03.450) 0:00:09.083 ******
2026-01-10 14:28:27.855391 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:28:27.855402 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.855413 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:28:27.855424 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:28:27.855435 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:28:27.855446 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:28:27.855456 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:28:27.855467 | orchestrator |
2026-01-10 14:28:27.855478 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-10 14:28:27.855489 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:01.959) 0:00:11.043 ******
2026-01-10 14:28:27.855499 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:28:27.855510 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:28:27.855521 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:28:27.855539 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:28:27.855550 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:28:27.855560 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:28:27.855571 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.855582 | orchestrator |
2026-01-10 14:28:27.855593 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-10 14:28:27.855604 | orchestrator | Saturday 10 January 2026 14:27:27 +0000 (0:00:10.775) 0:00:21.819 ******
2026-01-10 14:28:27.855615 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:28:27.855625 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:28:27.855636 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:28:27.855647 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:28:27.855657 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:28:27.855668 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:28:27.855679 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.855690 | orchestrator |
2026-01-10 14:28:27.855701 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-10 14:28:27.855711 | orchestrator | Saturday 10 January 2026 14:28:02 +0000 (0:00:35.410) 0:00:57.229 ******
2026-01-10 14:28:27.855723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:28:27.855736 | orchestrator |
2026-01-10 14:28:27.855747 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-10 14:28:27.855758 | orchestrator | Saturday 10 January 2026 14:28:04 +0000 (0:00:01.779) 0:00:59.009 ******
2026-01-10 14:28:27.855769 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-10 14:28:27.855780 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-10 14:28:27.855791 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-10 14:28:27.855802 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-10 14:28:27.855813 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-10 14:28:27.855823 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-10 14:28:27.855834 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-10 14:28:27.855845 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-10 14:28:27.855855 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-10 14:28:27.855866 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-10 14:28:27.855877 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-10 14:28:27.855888 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-10 14:28:27.855898 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-10 14:28:27.855909 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-10 14:28:27.855920 | orchestrator |
2026-01-10 14:28:27.855931 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-10 14:28:27.855942 | orchestrator | Saturday 10 January 2026 14:28:09 +0000 (0:00:04.800) 0:01:03.810 ******
2026-01-10 14:28:27.855953 | orchestrator | ok: [testbed-manager]
2026-01-10 14:28:27.855964 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:28:27.855974 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:28:27.855985 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:28:27.855996 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:27.856006 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:28:27.856017 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:28:27.856028 | orchestrator |
2026-01-10 14:28:27.856038 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-10 14:28:27.856049 | orchestrator | Saturday 10 January 2026 14:28:10 +0000 (0:00:01.225) 0:01:05.035 ******
2026-01-10 14:28:27.856060 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.856070 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:28:27.856081 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:28:27.856099 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:28:27.856109 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:28:27.856120 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:28:27.856131 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:28:27.856144 | orchestrator |
2026-01-10 14:28:27.856163 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-10 14:28:27.856200 | orchestrator | Saturday 10 January 2026 14:28:12 +0000 (0:00:01.809) 0:01:06.845 ******
2026-01-10 14:28:27.856225 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:28:27.856242 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:28:27.856284 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:27.856304 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:28:27.856320 | orchestrator | ok: [testbed-manager]
2026-01-10 14:28:27.856338 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:28:27.856356 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:28:27.856373 | orchestrator |
2026-01-10 14:28:27.856392 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-10 14:28:27.856410 | orchestrator | Saturday 10 January 2026 14:28:14 +0000 (0:00:01.986) 0:01:08.832 ******
2026-01-10 14:28:27.856429 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:28:27.856441 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:28:27.856452 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:28:27.856462 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:28:27.856480 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:28:27.856491 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:28:27.856501 | orchestrator | ok: [testbed-manager]
2026-01-10 14:28:27.856512 | orchestrator |
2026-01-10 14:28:27.856523 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-10 14:28:27.856536 | orchestrator | Saturday 10 January 2026 14:28:16 +0000 (0:00:02.710) 0:01:11.542 ******
2026-01-10 14:28:27.856553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-10 14:28:27.856571 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:28:27.856583 | orchestrator |
2026-01-10 14:28:27.856594 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-10 14:28:27.856605 | orchestrator | Saturday 10 January 2026 14:28:18 +0000 (0:00:02.045) 0:01:13.588 ******
2026-01-10 14:28:27.856616 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.856627 | orchestrator |
2026-01-10 14:28:27.856637 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-10 14:28:27.856652 | orchestrator | Saturday 10 January 2026 14:28:21 +0000 (0:00:02.595) 0:01:16.183 ******
2026-01-10 14:28:27.856668 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:28:27.856685 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:28:27.856700 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:28:27.856711 | orchestrator | changed: [testbed-manager]
2026-01-10 14:28:27.856722 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:28:27.856732 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:28:27.856743 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:28:27.856754 | orchestrator |
2026-01-10 14:28:27.856764 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:28:27.856775 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:28:27.856786 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:28:27.856797 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:28:27.856808 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:28:27.856835 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:28:27.856846 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:28:27.856857 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:28:27.856868 | orchestrator |
2026-01-10 14:28:27.856879 | orchestrator |
2026-01-10 14:28:27.856890 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:28:27.856901 | orchestrator | Saturday 10 January 2026 14:28:25 +0000 (0:00:03.719) 0:01:19.903 ******
2026-01-10 14:28:27.856913 | orchestrator | ===============================================================================
2026-01-10 14:28:27.856923 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 35.41s
2026-01-10 14:28:27.856934 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.78s
2026-01-10 14:28:27.856945 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.80s
2026-01-10 14:28:27.856955 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.72s
2026-01-10 14:28:27.856966 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.45s
2026-01-10 14:28:27.856976 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.71s
2026-01-10 14:28:27.856987 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.60s
2026-01-10 14:28:27.856998 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.30s
2026-01-10 14:28:27.857009 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.05s
2026-01-10 14:28:27.857019 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.99s
2026-01-10 14:28:27.857030 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.96s
2026-01-10 14:28:27.857049 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.81s
2026-01-10 14:28:27.857061 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.78s
2026-01-10 14:28:27.857072 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.46s
2026-01-10 14:28:27.857083 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.28s
2026-01-10 14:28:27.857093 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.23s
2026-01-10 14:28:27.857105 | orchestrator | 2026-01-10 14:28:27 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:27.857254 | orchestrator | 2026-01-10 14:28:27 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:27.858000 | orchestrator | 2026-01-10 14:28:27 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:27.859531 | orchestrator | 2026-01-10 14:28:27 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED
2026-01-10 14:28:27.859786 | orchestrator | 2026-01-10 14:28:27 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:28:30.925391 | orchestrator | 2026-01-10 14:28:30 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:30.929532 | orchestrator | 2026-01-10 14:28:30 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:30.946391 | orchestrator | 2026-01-10 14:28:30 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:30.946472 | orchestrator | 2026-01-10 14:28:30 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED
2026-01-10 14:28:30.946503 | orchestrator | 2026-01-10 14:28:30 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:28:33.976523 | orchestrator | 2026-01-10 14:28:33 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:33.977945 | orchestrator | 2026-01-10 14:28:33 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:33.979089 | orchestrator | 2026-01-10 14:28:33 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:33.980915 | orchestrator | 2026-01-10 14:28:33 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED
2026-01-10 14:28:33.981044 | orchestrator | 2026-01-10 14:28:33 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:28:37.027110 | orchestrator | 2026-01-10 14:28:37 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:37.029341 | orchestrator | 2026-01-10 14:28:37 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:37.032227 | orchestrator | 2026-01-10 14:28:37 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:37.036898 | orchestrator | 2026-01-10 14:28:37 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED
2026-01-10 14:28:37.036986 | orchestrator | 2026-01-10 14:28:37 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:28:40.086160 | orchestrator | 2026-01-10 14:28:40 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:40.086540 | orchestrator | 2026-01-10 14:28:40 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:40.088623 | orchestrator | 2026-01-10 14:28:40 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:40.090188 | orchestrator | 2026-01-10 14:28:40 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED
2026-01-10 14:28:40.090236 | orchestrator | 2026-01-10 14:28:40 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:28:43.135155 | orchestrator | 2026-01-10 14:28:43 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:43.136160 | orchestrator | 2026-01-10 14:28:43 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:43.139555 | orchestrator | 2026-01-10 14:28:43 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:43.141650 | orchestrator | 2026-01-10 14:28:43 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED
2026-01-10 14:28:43.141702 | orchestrator | 2026-01-10 14:28:43 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:28:46.202248 | orchestrator | 2026-01-10 14:28:46 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:46.204868 | orchestrator | 2026-01-10 14:28:46 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:46.208987 | orchestrator | 2026-01-10 14:28:46 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:46.211906 | orchestrator | 2026-01-10 14:28:46 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED
2026-01-10 14:28:46.213360 | orchestrator | 2026-01-10 14:28:46 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:28:49.281898 | orchestrator | 2026-01-10 14:28:49 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:49.284942 | orchestrator | 2026-01-10 14:28:49 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:49.290205 | orchestrator | 2026-01-10 14:28:49 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:49.300066 | orchestrator | 2026-01-10 14:28:49 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state STARTED
2026-01-10 14:28:49.300108 | orchestrator | 2026-01-10 14:28:49 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:28:52.356293 | orchestrator | 2026-01-10 14:28:52 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:52.358286 | orchestrator | 2026-01-10 14:28:52 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:52.360144 | orchestrator | 2026-01-10 14:28:52 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:52.361366 | orchestrator | 2026-01-10 14:28:52 | INFO  | Task 03c495f7-7d2d-4c46-973b-6478ce0e0519 is in state SUCCESS
2026-01-10 14:28:52.361836 | orchestrator | 2026-01-10 14:28:52 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:28:55.442289 | orchestrator | 2026-01-10 14:28:55 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:55.447034 | orchestrator | 2026-01-10 14:28:55 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:55.448035 | orchestrator | 2026-01-10 14:28:55 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:55.448072 | orchestrator | 2026-01-10 14:28:55 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:28:58.494684 | orchestrator | 2026-01-10 14:28:58 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:28:58.495134 | orchestrator | 2026-01-10 14:28:58 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:28:58.496101 | orchestrator | 2026-01-10 14:28:58 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:28:58.496114 | orchestrator | 2026-01-10 14:28:58 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:01.549012 | orchestrator | 2026-01-10 14:29:01 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:01.550733 | orchestrator | 2026-01-10 14:29:01 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:29:01.552148 | orchestrator | 2026-01-10 14:29:01 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:01.552539 | orchestrator | 2026-01-10 14:29:01 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:04.607230 | orchestrator | 2026-01-10 14:29:04 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:04.607558 | orchestrator | 2026-01-10 14:29:04 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:29:04.609527 | orchestrator | 2026-01-10 14:29:04 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:04.609557 | orchestrator | 2026-01-10 14:29:04 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:07.655745 | orchestrator | 2026-01-10 14:29:07 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:07.656514 | orchestrator | 2026-01-10 14:29:07 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:29:07.658526 | orchestrator | 2026-01-10 14:29:07 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:07.658597 | orchestrator | 2026-01-10 14:29:07 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:10.704168 | orchestrator | 2026-01-10 14:29:10 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:10.704532 | orchestrator | 2026-01-10 14:29:10 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state STARTED
2026-01-10 14:29:10.705587 | orchestrator | 2026-01-10 14:29:10 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:10.705638 | orchestrator | 2026-01-10 14:29:10 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:13.748114 | orchestrator | 2026-01-10 14:29:13 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:13.748312 | orchestrator | 2026-01-10 14:29:13 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:13.756870 | orchestrator | 2026-01-10 14:29:13 | INFO  | Task 916dffd3-8d2c-4ea8-ad98-d7b367127a72 is in state SUCCESS
2026-01-10 14:29:13.760244 | orchestrator |
2026-01-10 14:29:13.760318 | orchestrator |
2026-01-10 14:29:13.760335 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-01-10 14:29:13.760349 | orchestrator |
2026-01-10 14:29:13.760363 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-10 14:29:13.760394 | orchestrator | Saturday 10 January 2026 14:27:24 +0000 (0:00:00.186) 0:00:00.186 ******
2026-01-10 14:29:13.760409 | orchestrator | ok: [testbed-manager]
2026-01-10 14:29:13.760423 | orchestrator |
2026-01-10 14:29:13.760435 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-10 14:29:13.760449 | orchestrator | Saturday 10 January 2026 14:27:25 +0000 (0:00:00.988) 0:00:01.175 ******
2026-01-10 14:29:13.760461 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-10 14:29:13.760474 | orchestrator |
2026-01-10 14:29:13.760488 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-10 14:29:13.760501 | orchestrator | Saturday 10 January 2026 14:27:26 +0000 (0:00:00.595) 0:00:01.770 ******
2026-01-10 14:29:13.760514 | orchestrator | changed: [testbed-manager]
2026-01-10 14:29:13.760529 | orchestrator |
2026-01-10 14:29:13.760542 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-10 14:29:13.760555 | orchestrator | Saturday 10 January 2026 14:27:27 +0000 (0:00:01.263) 0:00:03.034 ******
2026-01-10 14:29:13.760584 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-01-10 14:29:13.760600 | orchestrator | ok: [testbed-manager] 2026-01-10 14:29:13.760612 | orchestrator | 2026-01-10 14:29:13.760624 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-01-10 14:29:13.760636 | orchestrator | Saturday 10 January 2026 14:28:46 +0000 (0:01:18.649) 0:01:21.684 ****** 2026-01-10 14:29:13.760677 | orchestrator | changed: [testbed-manager] 2026-01-10 14:29:13.760700 | orchestrator | 2026-01-10 14:29:13.760724 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:29:13.760737 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:29:13.760754 | orchestrator | 2026-01-10 14:29:13.760801 | orchestrator | 2026-01-10 14:29:13.760814 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:29:13.760825 | orchestrator | Saturday 10 January 2026 14:28:50 +0000 (0:00:04.300) 0:01:25.984 ****** 2026-01-10 14:29:13.760835 | orchestrator | =============================================================================== 2026-01-10 14:29:13.760846 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 78.65s 2026-01-10 14:29:13.760869 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.30s 2026-01-10 14:29:13.760881 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.26s 2026-01-10 14:29:13.760893 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.99s 2026-01-10 14:29:13.760904 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.60s 2026-01-10 14:29:13.760916 | orchestrator | 2026-01-10 14:29:13.760926 | orchestrator | 2026-01-10 14:29:13.760960 | orchestrator | PLAY [Apply role common] 
*******************************************************
2026-01-10 14:29:13.760973 | orchestrator |
2026-01-10 14:29:13.760984 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-10 14:29:13.760997 | orchestrator | Saturday 10 January 2026 14:26:57 +0000 (0:00:00.210) 0:00:00.210 ******
2026-01-10 14:29:13.761009 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:29:13.761023 | orchestrator |
2026-01-10 14:29:13.761035 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-10 14:29:13.761046 | orchestrator | Saturday 10 January 2026 14:26:59 +0000 (0:00:01.174) 0:00:01.385 ******
2026-01-10 14:29:13.761057 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-10 14:29:13.761068 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-10 14:29:13.761079 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-10 14:29:13.761090 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-10 14:29:13.761102 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-10 14:29:13.761114 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-10 14:29:13.761124 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-10 14:29:13.761135 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-10 14:29:13.761148 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-10 14:29:13.761159 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-10 14:29:13.761170 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-10 14:29:13.761182 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-10 14:29:13.761192 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-10 14:29:13.761204 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-10 14:29:13.761215 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-10 14:29:13.761227 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-10 14:29:13.761258 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-10 14:29:13.761280 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-10 14:29:13.761291 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-10 14:29:13.761302 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-10 14:29:13.761315 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-10 14:29:13.761326 | orchestrator |
2026-01-10 14:29:13.761338 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-10 14:29:13.761350 | orchestrator | Saturday 10 January 2026 14:27:03 +0000 (0:00:04.258) 0:00:05.644 ******
2026-01-10 14:29:13.761361 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:29:13.761401 | orchestrator |
2026-01-10
14:29:13.761414 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-10 14:29:13.761427 | orchestrator | Saturday 10 January 2026 14:27:04 +0000 (0:00:01.318) 0:00:06.962 ****** 2026-01-10 14:29:13.761444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.761473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.761487 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.761499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.761512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761550 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.761568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.761642 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.761668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761757 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761772 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.761794 | orchestrator | 2026-01-10 14:29:13.761802 | orchestrator | TASK [service-cert-copy : common | Copying over 
backend internal TLS certificate] *** 2026-01-10 14:29:13.761809 | orchestrator | Saturday 10 January 2026 14:27:09 +0000 (0:00:05.007) 0:00:11.970 ****** 2026-01-10 14:29:13.761825 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.761833 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.761846 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.761853 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:29:13.761861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.761869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.761877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.761884 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:29:13.761892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.761900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.761917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.761929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.761937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.761945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.761952 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:29:13.761959 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:29:13.761966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.761973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.761980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.761987 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:29:13.761994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.762013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.762080 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762102 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:13.762109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762116 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:13.762123 | orchestrator | 2026-01-10 14:29:13.762129 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-10 14:29:13.762136 | orchestrator | Saturday 10 January 2026 14:27:11 +0000 (0:00:01.636) 0:00:13.606 ****** 2026-01-10 14:29:13.762143 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.762150 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762167 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762174 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:29:13.762181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.762188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762208 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.762215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762233 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:29:13.762240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.762721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.762969 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:29:13.762984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 
14:29:13.762997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.763010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.763022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.763060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.763073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.763100 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:29:13.763113 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:29:13.763131 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:13.763143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:29:13.763156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.763168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.763179 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:13.763191 | orchestrator | 2026-01-10 14:29:13.763203 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-10 14:29:13.763216 | orchestrator | Saturday 10 January 2026 14:27:13 +0000 (0:00:02.342) 0:00:15.949 ****** 2026-01-10 14:29:13.763228 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:29:13.763239 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:29:13.763251 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:29:13.763262 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:29:13.763273 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:29:13.763285 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:13.763296 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:13.763307 | orchestrator | 2026-01-10 14:29:13.763320 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-10 14:29:13.763332 | orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:01.612) 0:00:17.561 ****** 2026-01-10 14:29:13.763344 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:29:13.763364 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:29:13.763419 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:29:13.763434 | 
orchestrator | skipping: [testbed-node-2] 2026-01-10 14:29:13.763447 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:29:13.763459 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:29:13.763471 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:29:13.763483 | orchestrator | 2026-01-10 14:29:13.763495 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-10 14:29:13.763508 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:01.238) 0:00:18.800 ****** 2026-01-10 14:29:13.763521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.763535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.763566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.763580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.763593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763606 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.763619 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.763639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.763687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763724 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763823 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763836 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.763848 | orchestrator | 2026-01-10 14:29:13.763860 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-10 14:29:13.763872 | orchestrator | Saturday 10 January 2026 14:27:23 +0000 (0:00:06.518) 0:00:25.319 ****** 2026-01-10 14:29:13.763892 | orchestrator | [WARNING]: Skipped 2026-01-10 14:29:13.763907 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-10 14:29:13.763919 | orchestrator | to this access issue: 2026-01-10 14:29:13.763931 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-10 14:29:13.763943 | orchestrator | directory 2026-01-10 14:29:13.763956 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:29:13.763967 | orchestrator | 2026-01-10 14:29:13.763979 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-10 14:29:13.763991 | orchestrator | Saturday 10 January 2026 14:27:24 +0000 (0:00:01.563) 0:00:26.882 ****** 2026-01-10 14:29:13.764003 | orchestrator | [WARNING]: Skipped 2026-01-10 14:29:13.764015 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-10 14:29:13.764026 | orchestrator | to this access issue: 2026-01-10 14:29:13.764038 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-10 14:29:13.764049 | orchestrator | directory 2026-01-10 14:29:13.764061 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:29:13.764072 | orchestrator | 2026-01-10 14:29:13.764084 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-10 14:29:13.764095 | orchestrator | Saturday 10 January 2026 
14:27:26 +0000 (0:00:01.461) 0:00:28.343 ****** 2026-01-10 14:29:13.764107 | orchestrator | [WARNING]: Skipped 2026-01-10 14:29:13.764118 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-10 14:29:13.764130 | orchestrator | to this access issue: 2026-01-10 14:29:13.764141 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-10 14:29:13.764153 | orchestrator | directory 2026-01-10 14:29:13.764164 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:29:13.764176 | orchestrator | 2026-01-10 14:29:13.764188 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-10 14:29:13.764200 | orchestrator | Saturday 10 January 2026 14:27:27 +0000 (0:00:00.947) 0:00:29.291 ****** 2026-01-10 14:29:13.764211 | orchestrator | [WARNING]: Skipped 2026-01-10 14:29:13.764223 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-10 14:29:13.764234 | orchestrator | to this access issue: 2026-01-10 14:29:13.764247 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-10 14:29:13.764259 | orchestrator | directory 2026-01-10 14:29:13.764272 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:29:13.764284 | orchestrator | 2026-01-10 14:29:13.764296 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-10 14:29:13.764308 | orchestrator | Saturday 10 January 2026 14:27:27 +0000 (0:00:00.704) 0:00:29.995 ****** 2026-01-10 14:29:13.764320 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:29:13.764332 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:29:13.764344 | orchestrator | changed: [testbed-manager] 2026-01-10 14:29:13.764354 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:29:13.764365 | orchestrator | changed: [testbed-node-4] 2026-01-10 
14:29:13.764434 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:29:13.764451 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:29:13.764462 | orchestrator | 2026-01-10 14:29:13.764473 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-10 14:29:13.764485 | orchestrator | Saturday 10 January 2026 14:27:31 +0000 (0:00:04.192) 0:00:34.187 ****** 2026-01-10 14:29:13.764496 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:29:13.764508 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:29:13.764519 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:29:13.764554 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:29:13.764567 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:29:13.764578 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:29:13.764590 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:29:13.764600 | orchestrator | 2026-01-10 14:29:13.764610 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-10 14:29:13.764622 | orchestrator | Saturday 10 January 2026 14:27:35 +0000 (0:00:03.200) 0:00:37.387 ****** 2026-01-10 14:29:13.764632 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:29:13.764643 | orchestrator | changed: [testbed-manager] 2026-01-10 14:29:13.764654 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:29:13.764664 | orchestrator | changed: [testbed-node-2] 2026-01-10 
14:29:13.764674 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:29:13.764686 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:29:13.764697 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:29:13.764708 | orchestrator | 2026-01-10 14:29:13.764719 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-10 14:29:13.764731 | orchestrator | Saturday 10 January 2026 14:27:37 +0000 (0:00:02.380) 0:00:39.768 ****** 2026-01-10 14:29:13.764743 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.764755 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.764767 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.764779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.764790 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.764822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.764836 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.764852 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.764864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.764875 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.764887 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.764898 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.764910 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.764936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.764949 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.764961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.764979 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 
14:29:13.764991 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.765003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:29:13.765015 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765037 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:29:13.765049 | orchestrator |
2026-01-10 14:29:13.765060 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-01-10 14:29:13.765072 | orchestrator | Saturday 10 January 2026 14:27:40 +0000 (0:00:02.955) 0:00:42.724 ******
2026-01-10 14:29:13.765084 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-10 14:29:13.765096 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-10 14:29:13.765107 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-10 14:29:13.765132 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-10 14:29:13.765143 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-10 14:29:13.765154 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-10 14:29:13.765165 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-10 14:29:13.765177 | orchestrator |
2026-01-10 14:29:13.765188 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-01-10 14:29:13.765199 | orchestrator | Saturday 10 January 2026 14:27:43 +0000 (0:00:02.983) 0:00:45.708 ******
2026-01-10 14:29:13.765211 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-10 14:29:13.765222 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-10 14:29:13.765233 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-10 14:29:13.765244 |
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:29:13.765255 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:29:13.765266 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:29:13.765277 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:29:13.765288 | orchestrator | 2026-01-10 14:29:13.765299 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-10 14:29:13.765310 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:02.105) 0:00:47.813 ****** 2026-01-10 14:29:13.765321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.765334 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.765353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.765365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.765398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.765422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.765434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765446 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:29:13.765477 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765501 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:29:13.765622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:29:13.765633 | orchestrator |
2026-01-10 14:29:13.765648 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-01-10 14:29:13.765664 | orchestrator | Saturday 10 January 2026 14:27:48 +0000 (0:00:02.695) 0:00:50.509 ******
2026-01-10 14:29:13.765675 | orchestrator | changed: [testbed-manager]
2026-01-10 14:29:13.765686 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:29:13.765696 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:29:13.765706 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:29:13.765716 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:29:13.765726 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:29:13.765737 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:29:13.765748 | orchestrator |
2026-01-10 14:29:13.765758 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-01-10 14:29:13.765768 | orchestrator | Saturday 10 January 2026 14:27:49 +0000 (0:00:01.348) 0:00:51.857 ******
2026-01-10 14:29:13.765778 | orchestrator | changed: [testbed-manager]
2026-01-10 14:29:13.765788 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:29:13.765797 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:29:13.765807 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:29:13.765817 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:29:13.765828 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:29:13.765839 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:29:13.765849 | orchestrator |
2026-01-10 14:29:13.765859 | orchestrator
| TASK [common : Flush handlers] ************************************************* 2026-01-10 14:29:13.765871 | orchestrator | Saturday 10 January 2026 14:27:50 +0000 (0:00:01.294) 0:00:53.152 ****** 2026-01-10 14:29:13.765881 | orchestrator | 2026-01-10 14:29:13.765892 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:29:13.765909 | orchestrator | Saturday 10 January 2026 14:27:50 +0000 (0:00:00.066) 0:00:53.219 ****** 2026-01-10 14:29:13.765920 | orchestrator | 2026-01-10 14:29:13.765930 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:29:13.765941 | orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:00.061) 0:00:53.280 ****** 2026-01-10 14:29:13.765951 | orchestrator | 2026-01-10 14:29:13.765961 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:29:13.765972 | orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:00.228) 0:00:53.509 ****** 2026-01-10 14:29:13.765982 | orchestrator | 2026-01-10 14:29:13.765992 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:29:13.766003 | orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:00.063) 0:00:53.572 ****** 2026-01-10 14:29:13.766047 | orchestrator | 2026-01-10 14:29:13.766060 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:29:13.766071 | orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:00.059) 0:00:53.631 ****** 2026-01-10 14:29:13.766081 | orchestrator | 2026-01-10 14:29:13.766091 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:29:13.766101 | orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:00.064) 0:00:53.696 ****** 2026-01-10 14:29:13.766112 | orchestrator | 2026-01-10 14:29:13.766122 | 
orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-01-10 14:29:13.766133 | orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:00.086) 0:00:53.783 ******
2026-01-10 14:29:13.766143 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:29:13.766154 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:29:13.766164 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:29:13.766175 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:29:13.766185 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:29:13.766196 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:29:13.766206 | orchestrator | changed: [testbed-manager]
2026-01-10 14:29:13.766216 | orchestrator |
2026-01-10 14:29:13.766227 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-10 14:29:13.766238 | orchestrator | Saturday 10 January 2026 14:28:28 +0000 (0:00:36.555) 0:01:30.338 ******
2026-01-10 14:29:13.766248 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:29:13.766258 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:29:13.766269 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:29:13.766279 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:29:13.766289 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:29:13.766300 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:29:13.766310 | orchestrator | changed: [testbed-manager]
2026-01-10 14:29:13.766320 | orchestrator |
2026-01-10 14:29:13.766330 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-10 14:29:13.766341 | orchestrator | Saturday 10 January 2026 14:29:00 +0000 (0:00:32.304) 0:02:02.643 ******
2026-01-10 14:29:13.766351 | orchestrator | ok: [testbed-manager]
2026-01-10 14:29:13.766362 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:29:13.766372 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:29:13.766403 |
orchestrator | ok: [testbed-node-3]
2026-01-10 14:29:13.766414 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:29:13.766425 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:29:13.766435 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:29:13.766445 | orchestrator |
2026-01-10 14:29:13.766456 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-10 14:29:13.766467 | orchestrator | Saturday 10 January 2026 14:29:02 +0000 (0:00:02.550) 0:02:05.194 ******
2026-01-10 14:29:13.766477 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:29:13.766488 | orchestrator | changed: [testbed-manager]
2026-01-10 14:29:13.766497 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:29:13.766503 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:29:13.766512 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:29:13.766534 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:29:13.766544 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:29:13.766554 | orchestrator |
2026-01-10 14:29:13.766565 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:29:13.766578 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:29:13.766589 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:29:13.766705 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:29:13.766728 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:29:13.766740 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:29:13.766750 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:29:13.766760 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:29:13.766808 | orchestrator |
2026-01-10 14:29:13.766820 | orchestrator |
2026-01-10 14:29:13.766831 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:29:13.766843 | orchestrator | Saturday 10 January 2026 14:29:11 +0000 (0:00:08.688) 0:02:13.882 ******
2026-01-10 14:29:13.766854 | orchestrator | ===============================================================================
2026-01-10 14:29:13.766865 | orchestrator | common : Restart fluentd container ------------------------------------- 36.56s
2026-01-10 14:29:13.766875 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.30s
2026-01-10 14:29:13.766885 | orchestrator | common : Restart cron container ----------------------------------------- 8.69s
2026-01-10 14:29:13.766896 | orchestrator | common : Copying over config.json files for services -------------------- 6.52s
2026-01-10 14:29:13.766906 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.01s
2026-01-10 14:29:13.766917 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.26s
2026-01-10 14:29:13.766927 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.19s
2026-01-10 14:29:13.766938 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.20s
2026-01-10 14:29:13.766948 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.98s
2026-01-10 14:29:13.766959 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.96s
2026-01-10 14:29:13.766966 | orchestrator | common : Check common containers ---------------------------------------- 2.70s
2026-01-10 14:29:13.766972 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.55s
2026-01-10 14:29:13.766979 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.38s
2026-01-10 14:29:13.766985 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.34s
2026-01-10 14:29:13.766991 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.11s
2026-01-10 14:29:13.766997 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.64s
2026-01-10 14:29:13.767004 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.61s
2026-01-10 14:29:13.767010 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.56s
2026-01-10 14:29:13.767016 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.46s
2026-01-10 14:29:13.767032 | orchestrator | common : Creating log volume -------------------------------------------- 1.35s
2026-01-10 14:29:13.767038 | orchestrator | 2026-01-10 14:29:13 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:13.767045 | orchestrator | 2026-01-10 14:29:13 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:13.767052 | orchestrator | 2026-01-10 14:29:13 | INFO  | Task 6bb63d92-cd59-40af-92b3-45567bbfc091 is in state STARTED
2026-01-10 14:29:13.767058 | orchestrator | 2026-01-10 14:29:13 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:13.767065 | orchestrator | 2026-01-10 14:29:13 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:16.791562 | orchestrator | 2026-01-10 14:29:16 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:16.791668 | orchestrator | 2026-01-10 14:29:16 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:16.792117 | orchestrator | 2026-01-10 14:29:16 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:16.792718 | orchestrator | 2026-01-10 14:29:16 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:16.793457 | orchestrator | 2026-01-10 14:29:16 | INFO  | Task 6bb63d92-cd59-40af-92b3-45567bbfc091 is in state STARTED
2026-01-10 14:29:16.794123 | orchestrator | 2026-01-10 14:29:16 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:16.794268 | orchestrator | 2026-01-10 14:29:16 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:19.818443 | orchestrator | 2026-01-10 14:29:19 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:19.818861 | orchestrator | 2026-01-10 14:29:19 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:19.819579 | orchestrator | 2026-01-10 14:29:19 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:19.820182 | orchestrator | 2026-01-10 14:29:19 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:19.821055 | orchestrator | 2026-01-10 14:29:19 | INFO  | Task 6bb63d92-cd59-40af-92b3-45567bbfc091 is in state STARTED
2026-01-10 14:29:19.823235 | orchestrator | 2026-01-10 14:29:19 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:19.823282 | orchestrator | 2026-01-10 14:29:19 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:22.844470 | orchestrator | 2026-01-10 14:29:22 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:22.845780 | orchestrator | 2026-01-10 14:29:22 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:22.846796 | orchestrator | 2026-01-10 14:29:22 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:22.849365 | orchestrator | 2026-01-10 14:29:22 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:22.850983 | orchestrator | 2026-01-10 14:29:22 | INFO  | Task 6bb63d92-cd59-40af-92b3-45567bbfc091 is in state STARTED
2026-01-10 14:29:22.851692 | orchestrator | 2026-01-10 14:29:22 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:22.851831 | orchestrator | 2026-01-10 14:29:22 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:25.888510 | orchestrator | 2026-01-10 14:29:25 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:25.889217 | orchestrator | 2026-01-10 14:29:25 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:25.890719 | orchestrator | 2026-01-10 14:29:25 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:25.894204 | orchestrator | 2026-01-10 14:29:25 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:25.895441 | orchestrator | 2026-01-10 14:29:25 | INFO  | Task 6bb63d92-cd59-40af-92b3-45567bbfc091 is in state STARTED
2026-01-10 14:29:25.896530 | orchestrator | 2026-01-10 14:29:25 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:25.896555 | orchestrator | 2026-01-10 14:29:25 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:28.932069 | orchestrator | 2026-01-10 14:29:28 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:29:28.934286 | orchestrator | 2026-01-10 14:29:28 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:28.936757 | orchestrator | 2026-01-10 14:29:28 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:28.938269 | orchestrator | 2026-01-10 14:29:28 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:28.940961 | orchestrator | 2026-01-10 14:29:28 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:28.942310 | orchestrator | 2026-01-10 14:29:28 | INFO  | Task 6bb63d92-cd59-40af-92b3-45567bbfc091 is in state SUCCESS
2026-01-10 14:29:28.947850 | orchestrator | 2026-01-10 14:29:28 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:28.947904 | orchestrator | 2026-01-10 14:29:28 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:31.985100 | orchestrator | 2026-01-10 14:29:31 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:29:31.985307 | orchestrator | 2026-01-10 14:29:31 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:31.986104 | orchestrator | 2026-01-10 14:29:31 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:31.987510 | orchestrator | 2026-01-10 14:29:31 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:31.989368 | orchestrator | 2026-01-10 14:29:31 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:31.990470 | orchestrator | 2026-01-10 14:29:31 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:31.990511 | orchestrator | 2026-01-10 14:29:31 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:35.055102 | orchestrator | 2026-01-10 14:29:35 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:29:35.055212 | orchestrator | 2026-01-10 14:29:35 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:35.055225 | orchestrator | 2026-01-10 14:29:35 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:35.055232 | orchestrator | 2026-01-10 14:29:35 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:35.055239 | orchestrator | 2026-01-10 14:29:35 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:35.055246 | orchestrator | 2026-01-10 14:29:35 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:35.055253 | orchestrator | 2026-01-10 14:29:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:38.067917 | orchestrator | 2026-01-10 14:29:38 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:29:38.068182 | orchestrator | 2026-01-10 14:29:38 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:38.068870 | orchestrator | 2026-01-10 14:29:38 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:38.069670 | orchestrator | 2026-01-10 14:29:38 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:38.072347 | orchestrator | 2026-01-10 14:29:38 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:38.073912 | orchestrator | 2026-01-10 14:29:38 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:38.073942 | orchestrator | 2026-01-10 14:29:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:41.194303 | orchestrator | 2026-01-10 14:29:41 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:29:41.194404 | orchestrator | 2026-01-10 14:29:41 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:41.446852 | orchestrator | 2026-01-10 14:29:41 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:41.450154 | orchestrator | 2026-01-10 14:29:41 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:41.451102 | orchestrator | 2026-01-10 14:29:41 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:41.453437 | orchestrator | 2026-01-10 14:29:41 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:41.453487 | orchestrator | 2026-01-10 14:29:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:44.508063 | orchestrator | 2026-01-10 14:29:44 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:29:44.508990 | orchestrator | 2026-01-10 14:29:44 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:44.509011 | orchestrator | 2026-01-10 14:29:44 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:44.509019 | orchestrator | 2026-01-10 14:29:44 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state STARTED
2026-01-10 14:29:44.509025 | orchestrator | 2026-01-10 14:29:44 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED
2026-01-10 14:29:44.509031 | orchestrator | 2026-01-10 14:29:44 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:29:44.509038 | orchestrator | 2026-01-10 14:29:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:29:47.570494 | orchestrator | 2026-01-10 14:29:47 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:29:47.570601 | orchestrator | 2026-01-10 14:29:47 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:29:47.570934 | orchestrator | 2026-01-10 14:29:47 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:29:47.572105 | orchestrator | 2026-01-10 14:29:47 | INFO  | Task 81f3db8c-2ba0-4418-b9c6-33246e3a63cd is in state SUCCESS
2026-01-10 14:29:47.574678 | orchestrator |
2026-01-10 14:29:47.574782 | orchestrator |
2026-01-10 14:29:47.574807 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:29:47.574829 | orchestrator |
2026-01-10
14:29:47.574848 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:29:47.574866 | orchestrator | Saturday 10 January 2026 14:29:17 +0000 (0:00:00.198) 0:00:00.198 ******
2026-01-10 14:29:47.574886 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:29:47.574905 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:29:47.574958 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:29:47.574978 | orchestrator |
2026-01-10 14:29:47.574998 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:29:47.575035 | orchestrator | Saturday 10 January 2026 14:29:17 +0000 (0:00:00.297) 0:00:00.495 ******
2026-01-10 14:29:47.575159 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-01-10 14:29:47.575181 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-01-10 14:29:47.575198 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-01-10 14:29:47.575217 | orchestrator |
2026-01-10 14:29:47.575236 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-01-10 14:29:47.575254 | orchestrator |
2026-01-10 14:29:47.575273 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-10 14:29:47.575293 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:00.438) 0:00:00.934 ******
2026-01-10 14:29:47.575314 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:29:47.575335 | orchestrator |
2026-01-10 14:29:47.575354 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-01-10 14:29:47.575373 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:00.401) 0:00:01.335 ******
2026-01-10 14:29:47.575393 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-10
14:29:47.575413 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-10 14:29:47.575433 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-10 14:29:47.575453 | orchestrator |
2026-01-10 14:29:47.575498 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-10 14:29:47.575516 | orchestrator | Saturday 10 January 2026 14:29:19 +0000 (0:00:00.683) 0:00:02.018 ******
2026-01-10 14:29:47.575534 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-10 14:29:47.575554 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-10 14:29:47.575573 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-10 14:29:47.575592 | orchestrator |
2026-01-10 14:29:47.575610 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-01-10 14:29:47.575630 | orchestrator | Saturday 10 January 2026 14:29:21 +0000 (0:00:01.640) 0:00:03.659 ******
2026-01-10 14:29:47.575649 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:29:47.575668 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:29:47.575685 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:29:47.575703 | orchestrator |
2026-01-10 14:29:47.575724 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-10 14:29:47.575744 | orchestrator | Saturday 10 January 2026 14:29:22 +0000 (0:00:01.611) 0:00:05.270 ******
2026-01-10 14:29:47.575763 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:29:47.575781 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:29:47.575801 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:29:47.575820 | orchestrator |
2026-01-10 14:29:47.575838 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:29:47.575858 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:29:47.575879 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:29:47.575899 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:29:47.575917 | orchestrator |
2026-01-10 14:29:47.575936 | orchestrator |
2026-01-10 14:29:47.575955 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:29:47.575974 | orchestrator | Saturday 10 January 2026 14:29:25 +0000 (0:00:03.029) 0:00:08.300 ******
2026-01-10 14:29:47.575992 | orchestrator | ===============================================================================
2026-01-10 14:29:47.576027 | orchestrator | memcached : Restart memcached container --------------------------------- 3.03s
2026-01-10 14:29:47.576046 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.64s
2026-01-10 14:29:47.576065 | orchestrator | memcached : Check memcached container ----------------------------------- 1.61s
2026-01-10 14:29:47.576084 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.68s
2026-01-10 14:29:47.576102 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-01-10 14:29:47.576121 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.40s
2026-01-10 14:29:47.576140 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-01-10 14:29:47.576158 | orchestrator |
2026-01-10 14:29:47.576178 | orchestrator |
2026-01-10 14:29:47.576196 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:29:47.576216 | orchestrator |
2026-01-10 14:29:47.576235 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10
14:29:47.576253 | orchestrator | Saturday 10 January 2026 14:29:17 +0000 (0:00:00.312) 0:00:00.312 ******
2026-01-10 14:29:47.576271 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:29:47.576290 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:29:47.576308 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:29:47.576327 | orchestrator |
2026-01-10 14:29:47.576347 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:29:47.576393 | orchestrator | Saturday 10 January 2026 14:29:17 +0000 (0:00:00.265) 0:00:00.578 ******
2026-01-10 14:29:47.576412 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-01-10 14:29:47.576431 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-01-10 14:29:47.576450 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-10 14:29:47.576507 | orchestrator |
2026-01-10 14:29:47.576527 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-01-10 14:29:47.576546 | orchestrator |
2026-01-10 14:29:47.576584 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-01-10 14:29:47.576617 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:00.354) 0:00:00.932 ******
2026-01-10 14:29:47.576636 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:29:47.576655 | orchestrator |
2026-01-10 14:29:47.576673 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-01-10 14:29:47.576690 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:00.536) 0:00:01.468 ******
2026-01-10 14:29:47.576713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.576752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.576773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.576806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.576826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.576859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.576878 | orchestrator |
2026-01-10 14:29:47.576896 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-10 14:29:47.576921 | orchestrator | Saturday 10 January 2026 14:29:19 +0000 (0:00:01.204) 
0:00:02.673 ******
2026-01-10 14:29:47.576939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.576958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.576977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577075 | orchestrator |
2026-01-10 14:29:47.577092 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-10 14:29:47.577109 | orchestrator | Saturday 10 January 2026 14:29:22 +0000 (0:00:02.574) 0:00:05.247 ******
2026-01-10 14:29:47.577210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10
14:29:47.577280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577336 | orchestrator |
2026-01-10 14:29:47.577363 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-01-10 14:29:47.577383 | orchestrator | Saturday 10 January 2026 14:29:25 +0000 (0:00:01.780) 0:00:07.855 ******
2026-01-10 14:29:47.577408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:29:47.577601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 
'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-10 14:29:47.577618 | orchestrator | 2026-01-10 14:29:47.577635 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-10 14:29:47.577653 | orchestrator | Saturday 10 January 2026 14:29:26 +0000 (0:00:01.780) 0:00:09.635 ****** 2026-01-10 14:29:47.577671 | orchestrator | 2026-01-10 14:29:47.577689 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-10 14:29:47.577719 | orchestrator | Saturday 10 January 2026 14:29:26 +0000 (0:00:00.151) 0:00:09.787 ****** 2026-01-10 14:29:47.577738 | orchestrator | 2026-01-10 14:29:47.577756 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-10 14:29:47.577773 | orchestrator | Saturday 10 January 2026 14:29:27 +0000 (0:00:00.124) 0:00:09.911 ****** 2026-01-10 14:29:47.577791 | orchestrator | 2026-01-10 14:29:47.577809 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-10 14:29:47.577826 | orchestrator | Saturday 10 January 2026 14:29:27 +0000 (0:00:00.106) 0:00:10.017 ****** 2026-01-10 14:29:47.577843 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:29:47.577861 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:29:47.577880 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:29:47.577898 | orchestrator | 2026-01-10 14:29:47.577925 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-10 14:29:47.577943 | orchestrator | Saturday 10 January 
2026 14:29:35 +0000 (0:00:07.860) 0:00:17.878 ****** 2026-01-10 14:29:47.577972 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:29:47.577991 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:29:47.578009 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:29:47.578094 | orchestrator | 2026-01-10 14:29:47.578110 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:29:47.578126 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:29:47.578143 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:29:47.578186 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:29:47.578204 | orchestrator | 2026-01-10 14:29:47.578220 | orchestrator | 2026-01-10 14:29:47.578235 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:29:47.578250 | orchestrator | Saturday 10 January 2026 14:29:44 +0000 (0:00:09.548) 0:00:27.426 ****** 2026-01-10 14:29:47.578265 | orchestrator | =============================================================================== 2026-01-10 14:29:47.578280 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.55s 2026-01-10 14:29:47.578296 | orchestrator | redis : Restart redis container ----------------------------------------- 7.86s 2026-01-10 14:29:47.578311 | orchestrator | redis : Copying over redis config files --------------------------------- 2.61s 2026-01-10 14:29:47.578327 | orchestrator | redis : Copying over default config.json files -------------------------- 2.57s 2026-01-10 14:29:47.578342 | orchestrator | redis : Check redis containers ------------------------------------------ 1.78s 2026-01-10 14:29:47.578357 | orchestrator | redis : Ensuring config directories exist 
------------------------------- 1.20s 2026-01-10 14:29:47.578372 | orchestrator | redis : include_tasks --------------------------------------------------- 0.54s 2026-01-10 14:29:47.578386 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.38s 2026-01-10 14:29:47.578400 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2026-01-10 14:29:47.578415 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-01-10 14:29:47.578430 | orchestrator | 2026-01-10 14:29:47 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state STARTED 2026-01-10 14:29:47.578483 | orchestrator | 2026-01-10 14:29:47 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:29:47.578503 | orchestrator | 2026-01-10 14:29:47 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:18.426648 | orchestrator | 2026-01-10 14:30:18 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED 2026-01-10 14:30:18.427039 | orchestrator | 2026-01-10 14:30:18 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:30:18.428882 | orchestrator | 2026-01-10 14:30:18 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:30:18.430633 | orchestrator | 2026-01-10 14:30:18 | INFO  | Task 771905ba-4499-4d7d-829a-5e3a3445b05e is in state SUCCESS 2026-01-10 14:30:18.432725 | orchestrator | 2026-01-10 14:30:18.432779 | orchestrator | 2026-01-10 14:30:18.432789 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:30:18.432797 | orchestrator | 2026-01-10 14:30:18.432804 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:30:18.432811 | orchestrator | Saturday 10 January 2026 14:29:17 +0000 (0:00:00.243) 0:00:00.243 ****** 2026-01-10 14:30:18.432818 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:30:18.432827 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:30:18.432833 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:30:18.432840 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:30:18.432846 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:30:18.432852 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:30:18.432858 | orchestrator | 2026-01-10 14:30:18.432865 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:30:18.432871 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:00.599) 0:00:00.843 ****** 2026-01-10 14:30:18.432878 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:30:18.432908 | orchestrator | ok: [testbed-node-1] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:30:18.432915 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:30:18.432922 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:30:18.432928 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:30:18.432935 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:30:18.432941 | orchestrator | 2026-01-10 14:30:18.432948 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-10 14:30:18.432954 | orchestrator | 2026-01-10 14:30:18.432960 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-10 14:30:18.432967 | orchestrator | Saturday 10 January 2026 14:29:19 +0000 (0:00:00.787) 0:00:01.631 ****** 2026-01-10 14:30:18.432975 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:30:18.432984 | orchestrator | 2026-01-10 14:30:18.432991 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-10 14:30:18.432997 | orchestrator | Saturday 10 January 2026 14:29:20 +0000 (0:00:01.152) 0:00:02.783 ****** 2026-01-10 14:30:18.433004 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-10 14:30:18.433011 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-10 14:30:18.433017 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-10 14:30:18.433023 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-10 14:30:18.433030 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-10 14:30:18.433036 | orchestrator | changed: 
[testbed-node-5] => (item=openvswitch) 2026-01-10 14:30:18.433042 | orchestrator | 2026-01-10 14:30:18.433049 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-10 14:30:18.433055 | orchestrator | Saturday 10 January 2026 14:29:21 +0000 (0:00:01.233) 0:00:04.017 ****** 2026-01-10 14:30:18.433062 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-10 14:30:18.433069 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-10 14:30:18.433075 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-10 14:30:18.433081 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-10 14:30:18.433088 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-10 14:30:18.433093 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-10 14:30:18.433099 | orchestrator | 2026-01-10 14:30:18.433105 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-10 14:30:18.433111 | orchestrator | Saturday 10 January 2026 14:29:22 +0000 (0:00:01.387) 0:00:05.405 ****** 2026-01-10 14:30:18.433117 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-10 14:30:18.433124 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:30:18.433131 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-10 14:30:18.433138 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:30:18.433144 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-10 14:30:18.433150 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:30:18.433156 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-10 14:30:18.433163 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:30:18.433169 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-10 14:30:18.433176 | orchestrator | skipping: [testbed-node-4] 2026-01-10 
14:30:18.433182 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-10 14:30:18.433189 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:30:18.433195 | orchestrator | 2026-01-10 14:30:18.433212 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-10 14:30:18.433227 | orchestrator | Saturday 10 January 2026 14:29:24 +0000 (0:00:01.102) 0:00:06.507 ****** 2026-01-10 14:30:18.433234 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:30:18.433240 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:30:18.433247 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:30:18.433253 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:30:18.433260 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:30:18.433266 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:30:18.433273 | orchestrator | 2026-01-10 14:30:18.433279 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-10 14:30:18.433286 | orchestrator | Saturday 10 January 2026 14:29:24 +0000 (0:00:00.621) 0:00:07.129 ****** 2026-01-10 14:30:18.433315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433325 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433415 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433433 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-01-10 14:30:18.433440 | orchestrator | 2026-01-10 14:30:18.433447 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-10 14:30:18.433454 | orchestrator | Saturday 10 January 2026 14:29:26 +0000 (0:00:01.578) 0:00:08.707 ****** 2026-01-10 14:30:18.433461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433494 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:30:18.433558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433606 | orchestrator |
2026-01-10 14:30:18.433613 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-01-10 14:30:18.433619 | orchestrator | Saturday 10 January 2026 14:29:29 +0000 (0:00:03.214) 0:00:11.921 ******
2026-01-10 14:30:18.433626 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:30:18.433632 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:30:18.433639 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:30:18.433645 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:30:18.433651 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:30:18.433658 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:30:18.433663 | orchestrator |
2026-01-10 14:30:18.433669 | orchestrator | TASK
[openvswitch : Check openvswitch containers] ******************************
2026-01-10 14:30:18.433676 | orchestrator | Saturday 10 January 2026 14:29:30 +0000 (0:00:00.989) 0:00:12.911 ******
2026-01-10 14:30:18.433697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-10 14:30:18.433703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-10 14:30:18.433716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-10 14:30:18.433727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-10 14:30:18.433739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-10 14:30:18.433746 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-10 14:30:18.433770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:30:18.433812 | orchestrator |
2026-01-10 14:30:18.433818 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:30:18.433825 | orchestrator | Saturday 10 January 2026 14:29:32 +0000 (0:00:02.326) 0:00:15.238 ******
2026-01-10 14:30:18.433832 | orchestrator |
2026-01-10 14:30:18.433838 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:30:18.433849 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.481) 0:00:15.719 ******
2026-01-10 14:30:18.433855 | orchestrator |
2026-01-10 14:30:18.433862 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:30:18.433868 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.195) 0:00:15.915 ******
2026-01-10 14:30:18.433874 | orchestrator |
2026-01-10 14:30:18.433880 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:30:18.433887 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.172) 0:00:16.088 ******
2026-01-10 14:30:18.433894 | orchestrator |
2026-01-10 14:30:18.433900 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:30:18.433907 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.241) 0:00:16.329 ******
2026-01-10 14:30:18.433913 | orchestrator |
2026-01-10 14:30:18.433920 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:30:18.433926 | orchestrator | Saturday 10 January 2026 14:29:34 +0000 (0:00:00.184) 0:00:16.513 ******
2026-01-10 14:30:18.433932 | orchestrator |
2026-01-10 14:30:18.433939 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-01-10 14:30:18.433945 | orchestrator | Saturday 10 January 2026 14:29:34 +0000 (0:00:00.130) 0:00:16.644 ******
2026-01-10 14:30:18.433952 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:30:18.433958 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:30:18.433965 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:30:18.433971 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:30:18.433978 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:30:18.433984 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:30:18.433990 | orchestrator |
2026-01-10 14:30:18.433997 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-01-10 14:30:18.434004 | orchestrator | Saturday 10 January 2026 14:29:44 +0000 (0:00:10.089) 0:00:26.733 ******
2026-01-10 14:30:18.434011 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:30:18.434126 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:30:18.434136 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:30:18.434142 |
orchestrator | ok: [testbed-node-3]
2026-01-10 14:30:18.434149 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:30:18.434155 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:30:18.434162 | orchestrator |
2026-01-10 14:30:18.434168 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-10 14:30:18.434174 | orchestrator | Saturday 10 January 2026 14:29:47 +0000 (0:00:03.004) 0:00:29.738 ******
2026-01-10 14:30:18.434180 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:30:18.434187 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:30:18.434193 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:30:18.434199 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:30:18.434206 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:30:18.434212 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:30:18.434217 | orchestrator |
2026-01-10 14:30:18.434228 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-01-10 14:30:18.434235 | orchestrator | Saturday 10 January 2026 14:29:52 +0000 (0:00:04.816) 0:00:34.554 ******
2026-01-10 14:30:18.434241 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-01-10 14:30:18.434247 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-01-10 14:30:18.434254 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-01-10 14:30:18.434261 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-01-10 14:30:18.434267 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-01-10 14:30:18.434280 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-01-10 14:30:18.434293 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-01-10 14:30:18.434300 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-01-10 14:30:18.434306 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-01-10 14:30:18.434312 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-01-10 14:30:18.434319 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-01-10 14:30:18.434325 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-01-10 14:30:18.434332 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:30:18.434338 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:30:18.434345 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:30:18.434351 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:30:18.434357 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:30:18.434363 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:30:18.434369 | orchestrator |
2026-01-10 14:30:18.434374 |
orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-01-10 14:30:18.434380 | orchestrator | Saturday 10 January 2026 14:29:59 +0000 (0:00:07.523) 0:00:42.077 ******
2026-01-10 14:30:18.434386 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-01-10 14:30:18.434393 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:30:18.434399 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-01-10 14:30:18.434406 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:30:18.434412 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-01-10 14:30:18.434419 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:30:18.434426 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-01-10 14:30:18.434432 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-01-10 14:30:18.434438 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-01-10 14:30:18.434445 | orchestrator |
2026-01-10 14:30:18.434451 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-01-10 14:30:18.434457 | orchestrator | Saturday 10 January 2026 14:30:02 +0000 (0:00:02.509) 0:00:44.587 ******
2026-01-10 14:30:18.434464 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:30:18.434471 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:30:18.434477 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:30:18.434483 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:30:18.434489 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:30:18.434496 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:30:18.434503 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:30:18.434509 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:30:18.434515 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:30:18.434522 | orchestrator |
2026-01-10 14:30:18.434546 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-10 14:30:18.434553 | orchestrator | Saturday 10 January 2026 14:30:06 +0000 (0:00:03.982) 0:00:48.570 ******
2026-01-10 14:30:18.434559 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:30:18.434570 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:30:18.434577 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:30:18.434583 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:30:18.434590 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:30:18.434596 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:30:18.434603 | orchestrator |
2026-01-10 14:30:18.434609 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:30:18.434621 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-10 14:30:18.434629 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-10 14:30:18.434636 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-10 14:30:18.434643 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-10 14:30:18.434649 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-10 14:30:18.434661 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-10 14:30:18.434667 | orchestrator |
2026-01-10 14:30:18.434674 | orchestrator |
2026-01-10 14:30:18.434681 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:30:18.434688 | orchestrator | Saturday 10 January 2026 14:30:15 +0000 (0:00:08.845) 0:00:57.416 ******
2026-01-10 14:30:18.434694 | orchestrator | ===============================================================================
2026-01-10 14:30:18.434701 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.66s
2026-01-10 14:30:18.434708 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.09s
2026-01-10 14:30:18.434714 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.52s
2026-01-10 14:30:18.434721 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.98s
2026-01-10 14:30:18.434727 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.21s
2026-01-10 14:30:18.434733 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 3.00s
2026-01-10 14:30:18.434739 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.51s
2026-01-10 14:30:18.434745 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.33s
2026-01-10 14:30:18.434752 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.58s
2026-01-10 14:30:18.434758 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.41s
2026-01-10 14:30:18.434764 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.39s
2026-01-10 14:30:18.434770 | orchestrator | module-load : Load modules ---------------------------------------------- 1.23s
2026-01-10 14:30:18.434776 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.15s
2026-01-10 14:30:18.434782 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.10s
2026-01-10 14:30:18.434789 | orchestrator | openvswitch : Copying over
ovs-vsctl wrapper ---------------------------- 0.99s
2026-01-10 14:30:18.434795 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2026-01-10 14:30:18.434802 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.62s
2026-01-10 14:30:18.434808 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s
2026-01-10 14:30:18.434815 | orchestrator | 2026-01-10 14:30:18 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:30:18.434828 | orchestrator | 2026-01-10 14:30:18 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:30:18.434834 | orchestrator | 2026-01-10 14:30:18 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:30:21.460609 | orchestrator | 2026-01-10 14:30:21 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:30:21.461190 | orchestrator | 2026-01-10 14:30:21 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:30:21.461786 | orchestrator | 2026-01-10 14:30:21 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:30:21.463309 | orchestrator | 2026-01-10 14:30:21 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:30:21.463943 | orchestrator | 2026-01-10 14:30:21 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED
2026-01-10 14:30:21.463991 | orchestrator | 2026-01-10 14:30:21 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:04.236708 | orchestrator | 2026-01-10 14:31:04 | INFO  |
Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED 2026-01-10 14:31:04.236799 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:31:04.238438 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:04.239206 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:04.241235 | orchestrator | 2026-01-10 14:31:04 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:31:04.241287 | orchestrator | 2026-01-10 14:31:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:07.288214 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED 2026-01-10 14:31:07.292722 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:31:07.293953 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:07.297169 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:07.299896 | orchestrator | 2026-01-10 14:31:07 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:31:07.300302 | orchestrator | 2026-01-10 14:31:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:10.442720 | orchestrator | 2026-01-10 14:31:10 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED 2026-01-10 14:31:10.445961 | orchestrator | 2026-01-10 14:31:10 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:31:10.449898 | orchestrator | 2026-01-10 14:31:10 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:10.449952 | orchestrator | 2026-01-10 14:31:10 | INFO  | Task 
72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:10.450779 | orchestrator | 2026-01-10 14:31:10 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:31:10.450815 | orchestrator | 2026-01-10 14:31:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:13.588001 | orchestrator | 2026-01-10 14:31:13 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED 2026-01-10 14:31:13.588333 | orchestrator | 2026-01-10 14:31:13 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:31:13.588986 | orchestrator | 2026-01-10 14:31:13 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:13.589491 | orchestrator | 2026-01-10 14:31:13 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:13.590072 | orchestrator | 2026-01-10 14:31:13 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:31:13.590137 | orchestrator | 2026-01-10 14:31:13 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:16.621608 | orchestrator | 2026-01-10 14:31:16 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED 2026-01-10 14:31:16.621894 | orchestrator | 2026-01-10 14:31:16 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:31:16.622662 | orchestrator | 2026-01-10 14:31:16 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:16.623396 | orchestrator | 2026-01-10 14:31:16 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:16.624090 | orchestrator | 2026-01-10 14:31:16 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:31:16.624109 | orchestrator | 2026-01-10 14:31:16 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:19.660040 | orchestrator | 2026-01-10 14:31:19 | INFO  | Task 
e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED 2026-01-10 14:31:19.660093 | orchestrator | 2026-01-10 14:31:19 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:31:19.660110 | orchestrator | 2026-01-10 14:31:19 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:19.660343 | orchestrator | 2026-01-10 14:31:19 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:19.661190 | orchestrator | 2026-01-10 14:31:19 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state STARTED 2026-01-10 14:31:19.661211 | orchestrator | 2026-01-10 14:31:19 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:22.701718 | orchestrator | 2026-01-10 14:31:22 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED 2026-01-10 14:31:22.701839 | orchestrator | 2026-01-10 14:31:22 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:31:22.702643 | orchestrator | 2026-01-10 14:31:22 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:22.703228 | orchestrator | 2026-01-10 14:31:22 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:22.704661 | orchestrator | 2026-01-10 14:31:22 | INFO  | Task 08bae13d-4e59-4d5d-b5b9-4783970f1b48 is in state SUCCESS 2026-01-10 14:31:22.707188 | orchestrator | 2026-01-10 14:31:22.707237 | orchestrator | 2026-01-10 14:31:22.707246 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-10 14:31:22.707253 | orchestrator | 2026-01-10 14:31:22.707260 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-10 14:31:22.707268 | orchestrator | Saturday 10 January 2026 14:26:58 +0000 (0:00:00.198) 0:00:00.198 ****** 2026-01-10 14:31:22.707275 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:31:22.707283 | 
orchestrator | ok: [testbed-node-4] 2026-01-10 14:31:22.707289 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:31:22.707296 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:22.707303 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:22.707309 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:22.707315 | orchestrator | 2026-01-10 14:31:22.707322 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-10 14:31:22.707328 | orchestrator | Saturday 10 January 2026 14:26:59 +0000 (0:00:00.611) 0:00:00.810 ****** 2026-01-10 14:31:22.707335 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.707342 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.707348 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.707355 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.707361 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.707368 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.707375 | orchestrator | 2026-01-10 14:31:22.707381 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-10 14:31:22.707388 | orchestrator | Saturday 10 January 2026 14:26:59 +0000 (0:00:00.694) 0:00:01.504 ****** 2026-01-10 14:31:22.707395 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.707401 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.707407 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.707422 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.707429 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.707460 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.707467 | orchestrator | 2026-01-10 14:31:22.707473 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-10 14:31:22.707480 | orchestrator | Saturday 10 January 2026 14:27:00 +0000 (0:00:00.662) 0:00:02.167 
****** 2026-01-10 14:31:22.707487 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:31:22.707493 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:31:22.707500 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.707506 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.707512 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.707519 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:31:22.707525 | orchestrator | 2026-01-10 14:31:22.707531 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-10 14:31:22.707538 | orchestrator | Saturday 10 January 2026 14:27:03 +0000 (0:00:02.745) 0:00:04.913 ****** 2026-01-10 14:31:22.707544 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:31:22.707551 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:31:22.707557 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:31:22.707563 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.707569 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.707576 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.707582 | orchestrator | 2026-01-10 14:31:22.707589 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-10 14:31:22.707596 | orchestrator | Saturday 10 January 2026 14:27:04 +0000 (0:00:01.204) 0:00:06.118 ****** 2026-01-10 14:31:22.707602 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:31:22.707608 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:31:22.707615 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:31:22.707621 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.707627 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.707633 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.707639 | orchestrator | 2026-01-10 14:31:22.707646 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] 
******************* 2026-01-10 14:31:22.707686 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:02.006) 0:00:08.125 ****** 2026-01-10 14:31:22.707693 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.707698 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.707704 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.707711 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.707717 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.707724 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.707731 | orchestrator | 2026-01-10 14:31:22.707737 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-10 14:31:22.707744 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:01.137) 0:00:09.262 ****** 2026-01-10 14:31:22.707751 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.707757 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.707764 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.707779 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.707785 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.707792 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.707799 | orchestrator | 2026-01-10 14:31:22.707805 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-10 14:31:22.707811 | orchestrator | Saturday 10 January 2026 14:27:08 +0000 (0:00:00.667) 0:00:09.930 ****** 2026-01-10 14:31:22.707818 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:31:22.707825 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:31:22.707860 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.707868 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 
14:31:22.707874 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:31:22.707887 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:31:22.707893 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:31:22.707899 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.707905 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:31:22.707911 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:31:22.707929 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.707936 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:31:22.707943 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:31:22.707949 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.707955 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.707962 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:31:22.707968 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:31:22.707974 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.707981 | orchestrator | 2026-01-10 14:31:22.707987 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-10 14:31:22.707994 | orchestrator | Saturday 10 January 2026 14:27:08 +0000 (0:00:00.782) 0:00:10.713 ****** 2026-01-10 14:31:22.708000 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.708007 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.708013 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.708020 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.708026 
| orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.708033 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.708039 | orchestrator | 2026-01-10 14:31:22.708045 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-10 14:31:22.708053 | orchestrator | Saturday 10 January 2026 14:27:09 +0000 (0:00:00.951) 0:00:11.664 ****** 2026-01-10 14:31:22.708059 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:31:22.708066 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:31:22.708073 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:31:22.708079 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:22.708086 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:22.708092 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:22.708098 | orchestrator | 2026-01-10 14:31:22.708105 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-10 14:31:22.708112 | orchestrator | Saturday 10 January 2026 14:27:10 +0000 (0:00:00.773) 0:00:12.437 ****** 2026-01-10 14:31:22.708118 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:31:22.708125 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.708131 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.708137 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:31:22.708143 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.708149 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:31:22.708156 | orchestrator | 2026-01-10 14:31:22.708162 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-10 14:31:22.708168 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:06.120) 0:00:18.558 ****** 2026-01-10 14:31:22.708175 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.708181 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.708187 | orchestrator | 
skipping: [testbed-node-5] 2026-01-10 14:31:22.708193 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.708200 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.708206 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.708212 | orchestrator | 2026-01-10 14:31:22.708219 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-10 14:31:22.708225 | orchestrator | Saturday 10 January 2026 14:27:18 +0000 (0:00:01.569) 0:00:20.128 ****** 2026-01-10 14:31:22.708237 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.708243 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.708250 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.708256 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.708262 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.708269 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.708275 | orchestrator | 2026-01-10 14:31:22.708282 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-10 14:31:22.708289 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:01.869) 0:00:21.997 ****** 2026-01-10 14:31:22.708296 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.708302 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.708309 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.708354 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.708364 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.708370 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.708377 | orchestrator | 2026-01-10 14:31:22.708383 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-10 14:31:22.708391 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:00.770) 0:00:22.768 ****** 
2026-01-10 14:31:22.708397 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-10 14:31:22.708404 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-10 14:31:22.708410 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:31:22.708417 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-10 14:31:22.708423 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-10 14:31:22.708429 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:31:22.708435 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-10 14:31:22.708441 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-10 14:31:22.708448 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:31:22.708454 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-10 14:31:22.708460 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-10 14:31:22.708467 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:22.708473 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-10 14:31:22.708479 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-10 14:31:22.708486 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.708492 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-10 14:31:22.708499 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-10 14:31:22.708505 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.708511 | orchestrator |
2026-01-10 14:31:22.708518 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-10 14:31:22.708530 | orchestrator | Saturday 10 January 2026 14:27:22 +0000 (0:00:01.350) 0:00:24.118 ******
2026-01-10 14:31:22.708537 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:31:22.708543 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:31:22.708550 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:31:22.708556 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:22.708563 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.708569 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.708576 | orchestrator |
2026-01-10 14:31:22.708582 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-10 14:31:22.708588 | orchestrator | Saturday 10 January 2026 14:27:23 +0000 (0:00:00.680) 0:00:24.799 ******
2026-01-10 14:31:22.708595 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:31:22.708601 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:31:22.708608 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:31:22.708614 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:22.708621 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.708632 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.708638 | orchestrator |
2026-01-10 14:31:22.708643 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-10 14:31:22.708662 | orchestrator |
2026-01-10 14:31:22.708670 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-10 14:31:22.708676 | orchestrator | Saturday 10 January 2026 14:27:24 +0000 (0:00:01.216) 0:00:26.015 ******
2026-01-10 14:31:22.708683 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:22.708689 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:22.708695 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:22.708702 | orchestrator |
2026-01-10 14:31:22.708708 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-10 14:31:22.708715 | orchestrator | Saturday 10 January 2026 14:27:26 +0000 (0:00:01.957) 0:00:27.973 ******
2026-01-10 14:31:22.708722 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:22.708729 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:22.708735 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:22.708742 | orchestrator |
2026-01-10 14:31:22.708748 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-10 14:31:22.708755 | orchestrator | Saturday 10 January 2026 14:27:27 +0000 (0:00:01.386) 0:00:29.360 ******
2026-01-10 14:31:22.708762 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:22.708768 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:22.708774 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:22.708781 | orchestrator |
2026-01-10 14:31:22.708787 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-10 14:31:22.708794 | orchestrator | Saturday 10 January 2026 14:27:28 +0000 (0:00:01.196) 0:00:30.556 ******
2026-01-10 14:31:22.708800 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:22.708806 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:22.709213 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:22.709239 | orchestrator |
2026-01-10 14:31:22.709246 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-10 14:31:22.709253 | orchestrator | Saturday 10 January 2026 14:27:29 +0000 (0:00:00.720) 0:00:31.277 ******
2026-01-10 14:31:22.709259 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:22.709266 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.709273 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.709279 | orchestrator |
2026-01-10 14:31:22.709286 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-01-10 14:31:22.709292 | orchestrator | Saturday 10 January 2026 14:27:30 +0000 (0:00:00.518) 0:00:31.795 ******
2026-01-10 14:31:22.709299 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:31:22.709305 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:31:22.709311 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:22.709318 | orchestrator |
2026-01-10 14:31:22.709324 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-10 14:31:22.709330 | orchestrator | Saturday 10 January 2026 14:27:31 +0000 (0:00:01.297) 0:00:33.093 ******
2026-01-10 14:31:22.709336 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:31:22.709343 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:31:22.709349 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:22.709355 | orchestrator |
2026-01-10 14:31:22.709362 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-10 14:31:22.709369 | orchestrator | Saturday 10 January 2026 14:27:32 +0000 (0:00:01.371) 0:00:34.465 ******
2026-01-10 14:31:22.709375 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:31:22.709383 | orchestrator |
2026-01-10 14:31:22.709389 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-10 14:31:22.709396 | orchestrator | Saturday 10 January 2026 14:27:33 +0000 (0:00:00.618) 0:00:35.084 ******
2026-01-10 14:31:22.709402 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:22.709409 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:22.709415 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:22.709428 | orchestrator |
2026-01-10 14:31:22.709435 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-10 14:31:22.709442 | orchestrator | Saturday 10 January 2026 14:27:35 +0000 (0:00:01.716) 0:00:36.801 ******
2026-01-10 14:31:22.709448 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.709455 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.709461 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:22.709467 | orchestrator |
2026-01-10 14:31:22.709474 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-10 14:31:22.709482 | orchestrator | Saturday 10 January 2026 14:27:35 +0000 (0:00:00.713) 0:00:37.515 ******
2026-01-10 14:31:22.709489 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.709495 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.709501 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:22.709508 | orchestrator |
2026-01-10 14:31:22.709514 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-10 14:31:22.709520 | orchestrator | Saturday 10 January 2026 14:27:37 +0000 (0:00:01.364) 0:00:38.879 ******
2026-01-10 14:31:22.709527 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.709533 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.709539 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:22.709546 | orchestrator |
2026-01-10 14:31:22.709551 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-10 14:31:22.709567 | orchestrator | Saturday 10 January 2026 14:27:38 +0000 (0:00:01.600) 0:00:40.479 ******
2026-01-10 14:31:22.709573 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:22.709580 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.709586 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.709593 | orchestrator |
2026-01-10 14:31:22.709600 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-10 14:31:22.709607 | orchestrator | Saturday 10 January 2026 14:27:39 +0000 (0:00:01.190) 0:00:41.670 ******
2026-01-10 14:31:22.709613 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:22.709620 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.709626 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.709632 | orchestrator |
2026-01-10 14:31:22.709639 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-10 14:31:22.709645 | orchestrator | Saturday 10 January 2026 14:27:40 +0000 (0:00:00.399) 0:00:42.070 ******
2026-01-10 14:31:22.709669 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:22.709676 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:31:22.709685 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:31:22.709691 | orchestrator |
2026-01-10 14:31:22.709698 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-10 14:31:22.709704 | orchestrator | Saturday 10 January 2026 14:27:42 +0000 (0:00:02.008) 0:00:44.079 ******
2026-01-10 14:31:22.709711 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:22.709718 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:22.709724 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:22.709730 | orchestrator |
2026-01-10 14:31:22.709737 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-10 14:31:22.709744 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:02.741) 0:00:46.820 ******
2026-01-10 14:31:22.709750 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:22.709756 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:22.709763 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:22.709769 | orchestrator |
2026-01-10 14:31:22.709776 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-10 14:31:22.709782 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.493) 0:00:47.313 ******
2026-01-10 14:31:22.709788 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-10 14:31:22.709795 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-10 14:31:22.709805 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-10 14:31:22.709811 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-10 14:31:22.709816 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-10 14:31:22.709822 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-10 14:31:22.709827 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-10 14:31:22.709832 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-10 14:31:22.709838 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-10 14:31:22.709843 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-10 14:31:22.709849 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-10 14:31:22.709854 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-01-10 14:31:22.709860 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:22.709865 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:22.709871 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:22.709877 | orchestrator | 2026-01-10 14:31:22.709882 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-10 14:31:22.709889 | orchestrator | Saturday 10 January 2026 14:28:28 +0000 (0:00:43.375) 0:01:30.689 ****** 2026-01-10 14:31:22.709894 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.709900 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.709906 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.709911 | orchestrator | 2026-01-10 14:31:22.709917 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-10 14:31:22.709923 | orchestrator | Saturday 10 January 2026 14:28:29 +0000 (0:00:00.410) 0:01:31.100 ****** 2026-01-10 14:31:22.709929 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.709935 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.709941 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.709947 | orchestrator | 2026-01-10 14:31:22.709952 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-10 14:31:22.709958 | orchestrator | Saturday 10 January 2026 14:28:30 +0000 (0:00:01.142) 0:01:32.243 ****** 2026-01-10 14:31:22.709964 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.709972 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.709979 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.709987 | orchestrator | 2026-01-10 14:31:22.710002 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-01-10 14:31:22.710008 | orchestrator | Saturday 10 January 2026 14:28:32 +0000 (0:00:01.708) 0:01:33.951 ****** 2026-01-10 14:31:22.710065 
| orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.710073 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.710080 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.710087 | orchestrator | 2026-01-10 14:31:22.710095 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-10 14:31:22.710102 | orchestrator | Saturday 10 January 2026 14:28:57 +0000 (0:00:25.512) 0:01:59.463 ****** 2026-01-10 14:31:22.710110 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:22.710124 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:22.710131 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:22.710138 | orchestrator | 2026-01-10 14:31:22.710145 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-10 14:31:22.710152 | orchestrator | Saturday 10 January 2026 14:28:58 +0000 (0:00:00.841) 0:02:00.305 ****** 2026-01-10 14:31:22.710163 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:22.710170 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:22.710177 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:22.710185 | orchestrator | 2026-01-10 14:31:22.710204 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-10 14:31:22.710212 | orchestrator | Saturday 10 January 2026 14:28:59 +0000 (0:00:00.714) 0:02:01.020 ****** 2026-01-10 14:31:22.710219 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.710226 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.710234 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.710241 | orchestrator | 2026-01-10 14:31:22.710249 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-10 14:31:22.710256 | orchestrator | Saturday 10 January 2026 14:29:00 +0000 (0:00:00.896) 0:02:01.917 ****** 2026-01-10 14:31:22.710262 | orchestrator | ok: [testbed-node-2] 
2026-01-10 14:31:22.710270 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:22.710276 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:22.710284 | orchestrator | 2026-01-10 14:31:22.710291 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-10 14:31:22.710298 | orchestrator | Saturday 10 January 2026 14:29:01 +0000 (0:00:01.556) 0:02:03.474 ****** 2026-01-10 14:31:22.710311 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:22.710319 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:22.710326 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:22.710334 | orchestrator | 2026-01-10 14:31:22.710341 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-10 14:31:22.710349 | orchestrator | Saturday 10 January 2026 14:29:02 +0000 (0:00:00.316) 0:02:03.790 ****** 2026-01-10 14:31:22.710356 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.710364 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.710371 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.710379 | orchestrator | 2026-01-10 14:31:22.710386 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-10 14:31:22.710394 | orchestrator | Saturday 10 January 2026 14:29:02 +0000 (0:00:00.615) 0:02:04.405 ****** 2026-01-10 14:31:22.710401 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.710408 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.710415 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.710422 | orchestrator | 2026-01-10 14:31:22.710429 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-10 14:31:22.710436 | orchestrator | Saturday 10 January 2026 14:29:03 +0000 (0:00:00.692) 0:02:05.098 ****** 2026-01-10 14:31:22.710444 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.710451 | 
orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.710459 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.710466 | orchestrator | 2026-01-10 14:31:22.710474 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-10 14:31:22.710481 | orchestrator | Saturday 10 January 2026 14:29:04 +0000 (0:00:01.285) 0:02:06.384 ****** 2026-01-10 14:31:22.710488 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:22.710496 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:22.710504 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:22.710512 | orchestrator | 2026-01-10 14:31:22.710519 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-10 14:31:22.710526 | orchestrator | Saturday 10 January 2026 14:29:05 +0000 (0:00:00.859) 0:02:07.243 ****** 2026-01-10 14:31:22.710533 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.710540 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.710548 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.710562 | orchestrator | 2026-01-10 14:31:22.710571 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-10 14:31:22.710578 | orchestrator | Saturday 10 January 2026 14:29:05 +0000 (0:00:00.288) 0:02:07.532 ****** 2026-01-10 14:31:22.710585 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.710593 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.710600 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.710607 | orchestrator | 2026-01-10 14:31:22.710614 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-10 14:31:22.710621 | orchestrator | Saturday 10 January 2026 14:29:06 +0000 (0:00:00.304) 0:02:07.836 ****** 2026-01-10 14:31:22.710627 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:22.710634 | orchestrator | 
ok: [testbed-node-0] 2026-01-10 14:31:22.710642 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:22.710681 | orchestrator | 2026-01-10 14:31:22.710691 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-10 14:31:22.710699 | orchestrator | Saturday 10 January 2026 14:29:07 +0000 (0:00:00.987) 0:02:08.824 ****** 2026-01-10 14:31:22.710707 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:22.710715 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:22.710722 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:22.710729 | orchestrator | 2026-01-10 14:31:22.710737 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-10 14:31:22.710745 | orchestrator | Saturday 10 January 2026 14:29:07 +0000 (0:00:00.613) 0:02:09.437 ****** 2026-01-10 14:31:22.710753 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-10 14:31:22.710767 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-10 14:31:22.710774 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-10 14:31:22.710782 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-10 14:31:22.710790 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-10 14:31:22.710797 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-10 14:31:22.710805 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-10 14:31:22.710813 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-10 
14:31:22.710825 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-10 14:31:22.710832 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-10 14:31:22.710840 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-10 14:31:22.710847 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-10 14:31:22.710854 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-10 14:31:22.710862 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-10 14:31:22.710869 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-10 14:31:22.710877 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-10 14:31:22.710885 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-10 14:31:22.710892 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-10 14:31:22.710899 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-10 14:31:22.710911 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-10 14:31:22.710919 | orchestrator | 2026-01-10 14:31:22.710927 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-10 14:31:22.710935 | orchestrator | 2026-01-10 14:31:22.710943 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-01-10 14:31:22.710951 | orchestrator | Saturday 10 January 2026 14:29:10 +0000 (0:00:03.184) 
0:02:12.621 ****** 2026-01-10 14:31:22.710958 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:31:22.710965 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:31:22.710972 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:31:22.710980 | orchestrator | 2026-01-10 14:31:22.710987 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-01-10 14:31:22.710995 | orchestrator | Saturday 10 January 2026 14:29:11 +0000 (0:00:00.518) 0:02:13.140 ****** 2026-01-10 14:31:22.711003 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:31:22.711011 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:31:22.711019 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:31:22.711026 | orchestrator | 2026-01-10 14:31:22.711033 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-01-10 14:31:22.711041 | orchestrator | Saturday 10 January 2026 14:29:12 +0000 (0:00:00.602) 0:02:13.742 ****** 2026-01-10 14:31:22.711048 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:31:22.711056 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:31:22.711063 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:31:22.711070 | orchestrator | 2026-01-10 14:31:22.711077 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-01-10 14:31:22.711083 | orchestrator | Saturday 10 January 2026 14:29:12 +0000 (0:00:00.323) 0:02:14.066 ****** 2026-01-10 14:31:22.711089 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:31:22.711095 | orchestrator | 2026-01-10 14:31:22.711131 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-01-10 14:31:22.711139 | orchestrator | Saturday 10 January 2026 14:29:13 +0000 (0:00:00.655) 0:02:14.721 ****** 2026-01-10 14:31:22.711146 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.711154 | 
orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.711162 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.711169 | orchestrator | 2026-01-10 14:31:22.711176 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-01-10 14:31:22.711184 | orchestrator | Saturday 10 January 2026 14:29:13 +0000 (0:00:00.309) 0:02:15.030 ****** 2026-01-10 14:31:22.711191 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.711199 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.711206 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.711214 | orchestrator | 2026-01-10 14:31:22.711220 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-01-10 14:31:22.711227 | orchestrator | Saturday 10 January 2026 14:29:13 +0000 (0:00:00.349) 0:02:15.380 ****** 2026-01-10 14:31:22.711235 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:31:22.711242 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:31:22.711250 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:31:22.711257 | orchestrator | 2026-01-10 14:31:22.711265 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-01-10 14:31:22.711272 | orchestrator | Saturday 10 January 2026 14:29:13 +0000 (0:00:00.304) 0:02:15.685 ****** 2026-01-10 14:31:22.711279 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:31:22.711286 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:31:22.711294 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:31:22.711301 | orchestrator | 2026-01-10 14:31:22.711316 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-01-10 14:31:22.711343 | orchestrator | Saturday 10 January 2026 14:29:14 +0000 (0:00:00.856) 0:02:16.541 ****** 2026-01-10 14:31:22.711352 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:31:22.711367 | 
orchestrator | changed: [testbed-node-4] 2026-01-10 14:31:22.711389 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:31:22.711398 | orchestrator | 2026-01-10 14:31:22.711406 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-01-10 14:31:22.711413 | orchestrator | Saturday 10 January 2026 14:29:16 +0000 (0:00:01.191) 0:02:17.732 ****** 2026-01-10 14:31:22.711420 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:31:22.711427 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:31:22.711435 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:31:22.711442 | orchestrator | 2026-01-10 14:31:22.711450 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-01-10 14:31:22.711462 | orchestrator | Saturday 10 January 2026 14:29:17 +0000 (0:00:01.184) 0:02:18.916 ****** 2026-01-10 14:31:22.711470 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:31:22.711477 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:31:22.711484 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:31:22.711492 | orchestrator | 2026-01-10 14:31:22.711514 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-10 14:31:22.711523 | orchestrator | 2026-01-10 14:31:22.711529 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-10 14:31:22.711537 | orchestrator | Saturday 10 January 2026 14:29:28 +0000 (0:00:10.919) 0:02:29.836 ****** 2026-01-10 14:31:22.711544 | orchestrator | ok: [testbed-manager] 2026-01-10 14:31:22.711552 | orchestrator | 2026-01-10 14:31:22.711559 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-10 14:31:22.711567 | orchestrator | Saturday 10 January 2026 14:29:28 +0000 (0:00:00.816) 0:02:30.652 ****** 2026-01-10 14:31:22.711574 | orchestrator | changed: [testbed-manager] 2026-01-10 
14:31:22.711582 | orchestrator | 2026-01-10 14:31:22.711589 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-10 14:31:22.711596 | orchestrator | Saturday 10 January 2026 14:29:29 +0000 (0:00:00.450) 0:02:31.103 ****** 2026-01-10 14:31:22.711604 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-10 14:31:22.711611 | orchestrator | 2026-01-10 14:31:22.711618 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-10 14:31:22.711626 | orchestrator | Saturday 10 January 2026 14:29:29 +0000 (0:00:00.583) 0:02:31.686 ****** 2026-01-10 14:31:22.711634 | orchestrator | changed: [testbed-manager] 2026-01-10 14:31:22.711641 | orchestrator | 2026-01-10 14:31:22.711678 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-10 14:31:22.711688 | orchestrator | Saturday 10 January 2026 14:29:31 +0000 (0:00:01.091) 0:02:32.777 ****** 2026-01-10 14:31:22.711696 | orchestrator | changed: [testbed-manager] 2026-01-10 14:31:22.711703 | orchestrator | 2026-01-10 14:31:22.711710 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-10 14:31:22.711718 | orchestrator | Saturday 10 January 2026 14:29:31 +0000 (0:00:00.606) 0:02:33.384 ****** 2026-01-10 14:31:22.711725 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-10 14:31:22.711733 | orchestrator | 2026-01-10 14:31:22.711740 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-10 14:31:22.711745 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:01.711) 0:02:35.096 ****** 2026-01-10 14:31:22.711750 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-10 14:31:22.711756 | orchestrator | 2026-01-10 14:31:22.711761 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-01-10 14:31:22.711767 | orchestrator | Saturday 10 January 2026 14:29:34 +0000 (0:00:00.816) 0:02:35.913 ****** 2026-01-10 14:31:22.711773 | orchestrator | changed: [testbed-manager] 2026-01-10 14:31:22.711780 | orchestrator | 2026-01-10 14:31:22.711786 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-10 14:31:22.711791 | orchestrator | Saturday 10 January 2026 14:29:34 +0000 (0:00:00.665) 0:02:36.578 ****** 2026-01-10 14:31:22.711797 | orchestrator | changed: [testbed-manager] 2026-01-10 14:31:22.711803 | orchestrator | 2026-01-10 14:31:22.711815 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-01-10 14:31:22.711822 | orchestrator | 2026-01-10 14:31:22.711829 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-01-10 14:31:22.711836 | orchestrator | Saturday 10 January 2026 14:29:35 +0000 (0:00:00.560) 0:02:37.138 ****** 2026-01-10 14:31:22.711842 | orchestrator | ok: [testbed-manager] 2026-01-10 14:31:22.711849 | orchestrator | 2026-01-10 14:31:22.711856 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-01-10 14:31:22.711863 | orchestrator | Saturday 10 January 2026 14:29:35 +0000 (0:00:00.125) 0:02:37.264 ****** 2026-01-10 14:31:22.711870 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 14:31:22.711877 | orchestrator | 2026-01-10 14:31:22.711884 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-01-10 14:31:22.711891 | orchestrator | Saturday 10 January 2026 14:29:35 +0000 (0:00:00.204) 0:02:37.469 ****** 2026-01-10 14:31:22.711897 | orchestrator | ok: [testbed-manager] 2026-01-10 14:31:22.711902 | orchestrator | 2026-01-10 14:31:22.711908 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-01-10 14:31:22.711915 | orchestrator | Saturday 10 January 2026 14:29:37 +0000 (0:00:01.568) 0:02:39.037 ****** 2026-01-10 14:31:22.711922 | orchestrator | ok: [testbed-manager] 2026-01-10 14:31:22.711928 | orchestrator | 2026-01-10 14:31:22.711935 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-01-10 14:31:22.711942 | orchestrator | Saturday 10 January 2026 14:29:38 +0000 (0:00:01.475) 0:02:40.513 ****** 2026-01-10 14:31:22.711949 | orchestrator | changed: [testbed-manager] 2026-01-10 14:31:22.711955 | orchestrator | 2026-01-10 14:31:22.711963 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-01-10 14:31:22.711997 | orchestrator | Saturday 10 January 2026 14:29:39 +0000 (0:00:00.763) 0:02:41.276 ****** 2026-01-10 14:31:22.712005 | orchestrator | ok: [testbed-manager] 2026-01-10 14:31:22.712012 | orchestrator | 2026-01-10 14:31:22.712025 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-01-10 14:31:22.712032 | orchestrator | Saturday 10 January 2026 14:29:40 +0000 (0:00:00.461) 0:02:41.738 ****** 2026-01-10 14:31:22.712038 | orchestrator | changed: [testbed-manager] 2026-01-10 14:31:22.712045 | orchestrator | 2026-01-10 14:31:22.712052 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-01-10 14:31:22.712059 | orchestrator | Saturday 10 January 2026 14:29:48 +0000 (0:00:08.615) 0:02:50.354 ****** 2026-01-10 14:31:22.712066 | orchestrator | changed: [testbed-manager] 2026-01-10 14:31:22.712073 | orchestrator | 2026-01-10 14:31:22.712079 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-01-10 14:31:22.712086 | orchestrator | Saturday 10 January 2026 14:30:00 +0000 (0:00:12.162) 0:03:02.516 ****** 2026-01-10 14:31:22.712093 | orchestrator | ok: [testbed-manager] 2026-01-10 
14:31:22.712099 | orchestrator | 2026-01-10 14:31:22.712106 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-01-10 14:31:22.712113 | orchestrator | 2026-01-10 14:31:22.712125 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-01-10 14:31:22.712131 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:00.467) 0:03:02.984 ****** 2026-01-10 14:31:22.712138 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:22.712145 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:22.712152 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:22.712159 | orchestrator | 2026-01-10 14:31:22.712165 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-01-10 14:31:22.712172 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:00.288) 0:03:03.273 ****** 2026-01-10 14:31:22.712179 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.712186 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:22.712193 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:22.712200 | orchestrator | 2026-01-10 14:31:22.712207 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-01-10 14:31:22.712223 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:00.313) 0:03:03.586 ****** 2026-01-10 14:31:22.712230 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:31:22.712236 | orchestrator | 2026-01-10 14:31:22.712243 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-01-10 14:31:22.712250 | orchestrator | Saturday 10 January 2026 14:30:02 +0000 (0:00:00.718) 0:03:04.305 ****** 2026-01-10 14:31:22.712257 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-10 14:31:22.712264 | 
orchestrator | 2026-01-10 14:31:22.712271 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-01-10 14:31:22.712278 | orchestrator | Saturday 10 January 2026 14:30:03 +0000 (0:00:00.888) 0:03:05.193 ****** 2026-01-10 14:31:22.712285 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:31:22.712292 | orchestrator | 2026-01-10 14:31:22.712298 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-01-10 14:31:22.712305 | orchestrator | Saturday 10 January 2026 14:30:04 +0000 (0:00:00.868) 0:03:06.062 ****** 2026-01-10 14:31:22.712312 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.712319 | orchestrator | 2026-01-10 14:31:22.712326 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-01-10 14:31:22.712333 | orchestrator | Saturday 10 January 2026 14:30:04 +0000 (0:00:00.137) 0:03:06.199 ****** 2026-01-10 14:31:22.712339 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:31:22.712345 | orchestrator | 2026-01-10 14:31:22.712351 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-01-10 14:31:22.712358 | orchestrator | Saturday 10 January 2026 14:30:05 +0000 (0:00:01.046) 0:03:07.247 ****** 2026-01-10 14:31:22.712364 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.712371 | orchestrator | 2026-01-10 14:31:22.712378 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-01-10 14:31:22.712385 | orchestrator | Saturday 10 January 2026 14:30:05 +0000 (0:00:00.135) 0:03:07.383 ****** 2026-01-10 14:31:22.712391 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.712397 | orchestrator | 2026-01-10 14:31:22.712404 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-01-10 14:31:22.712411 | orchestrator | Saturday 10 
January 2026 14:30:05 +0000 (0:00:00.135) 0:03:07.518 ****** 2026-01-10 14:31:22.712417 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.712424 | orchestrator | 2026-01-10 14:31:22.712431 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-10 14:31:22.712438 | orchestrator | Saturday 10 January 2026 14:30:05 +0000 (0:00:00.117) 0:03:07.635 ****** 2026-01-10 14:31:22.712445 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:22.712452 | orchestrator | 2026-01-10 14:31:22.712458 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-10 14:31:22.712465 | orchestrator | Saturday 10 January 2026 14:30:06 +0000 (0:00:00.112) 0:03:07.747 ****** 2026-01-10 14:31:22.712471 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-10 14:31:22.712478 | orchestrator | 2026-01-10 14:31:22.712485 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-10 14:31:22.712492 | orchestrator | Saturday 10 January 2026 14:30:10 +0000 (0:00:04.696) 0:03:12.444 ****** 2026-01-10 14:31:22.712499 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-10 14:31:22.712506 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-01-10 14:31:22.712513 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-10 14:31:22.712520 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-10 14:31:22.712527 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-10 14:31:22.712533 | orchestrator |
2026-01-10 14:31:22.712540 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-10 14:31:22.712551 | orchestrator | Saturday 10 January 2026 14:30:53 +0000 (0:00:42.387) 0:03:54.831 ******
2026-01-10 14:31:22.712562 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:31:22.712568 | orchestrator |
2026-01-10 14:31:22.712575 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-10 14:31:22.712582 | orchestrator | Saturday 10 January 2026 14:30:54 +0000 (0:00:01.274) 0:03:56.106 ******
2026-01-10 14:31:22.712589 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:31:22.712596 | orchestrator |
2026-01-10 14:31:22.712602 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-10 14:31:22.712609 | orchestrator | Saturday 10 January 2026 14:30:55 +0000 (0:00:01.559) 0:03:57.666 ******
2026-01-10 14:31:22.712616 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:31:22.712623 | orchestrator |
2026-01-10 14:31:22.712629 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-10 14:31:22.712636 | orchestrator | Saturday 10 January 2026 14:30:57 +0000 (0:00:01.107) 0:03:58.773 ******
2026-01-10 14:31:22.712644 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:22.712661 | orchestrator |
2026-01-10 14:31:22.712672 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-01-10 14:31:22.712679 | orchestrator | Saturday 10 January 2026 14:30:57 +0000 (0:00:00.144) 0:03:58.917 ******
2026-01-10 14:31:22.712686 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-01-10 14:31:22.712693 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-01-10 14:31:22.712699 | orchestrator |
2026-01-10 14:31:22.712706 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-01-10 14:31:22.712713 | orchestrator | Saturday 10 January 2026 14:30:58 +0000 (0:00:01.748) 0:04:00.665 ******
2026-01-10 14:31:22.712720 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:22.712726 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.712733 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.712740 | orchestrator |
2026-01-10 14:31:22.712746 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-01-10 14:31:22.712753 | orchestrator | Saturday 10 January 2026 14:30:59 +0000 (0:00:00.372) 0:04:01.038 ******
2026-01-10 14:31:22.712760 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:22.712766 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:22.712773 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:22.712780 | orchestrator |
2026-01-10 14:31:22.712787 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-01-10 14:31:22.712793 | orchestrator |
2026-01-10 14:31:22.712800 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-01-10 14:31:22.712807 | orchestrator | Saturday 10 January 2026 14:31:00 +0000 (0:00:01.362) 0:04:02.401 ******
2026-01-10 14:31:22.712813 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:22.712819 | orchestrator |
2026-01-10 14:31:22.712826 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-01-10 14:31:22.712833 | orchestrator | Saturday 10 January 2026 14:31:00 +0000 (0:00:00.128) 0:04:02.529 ******
2026-01-10 14:31:22.712840 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-01-10 14:31:22.712847 | orchestrator |
2026-01-10 14:31:22.712854 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-01-10 14:31:22.712861 | orchestrator | Saturday 10 January 2026 14:31:00 +0000 (0:00:00.187) 0:04:02.717 ******
2026-01-10 14:31:22.712868 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:22.712874 | orchestrator |
2026-01-10 14:31:22.712881 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-10 14:31:22.712888 | orchestrator |
2026-01-10 14:31:22.712895 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-10 14:31:22.712901 | orchestrator | Saturday 10 January 2026 14:31:06 +0000 (0:00:05.531) 0:04:08.249 ******
2026-01-10 14:31:22.712910 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:31:22.712916 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:31:22.712923 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:31:22.712930 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:22.712936 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:22.712942 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:22.712949 | orchestrator |
2026-01-10 14:31:22.712956 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-10 14:31:22.712962 | orchestrator | Saturday 10 January 2026 14:31:07 +0000 (0:00:00.953) 0:04:09.202 ******
2026-01-10 14:31:22.712969 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-10 14:31:22.712975 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-10 14:31:22.712982 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-10 14:31:22.712989 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-10 14:31:22.712996 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-10 14:31:22.713002 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-10 14:31:22.713009 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-10 14:31:22.713016 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-10 14:31:22.713023 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-10 14:31:22.713030 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-10 14:31:22.713036 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-10 14:31:22.713043 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-10 14:31:22.713055 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-10 14:31:22.713062 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-10 14:31:22.713069 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-10 14:31:22.713075 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-10 14:31:22.713082 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-10 14:31:22.713089 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-10 14:31:22.713096 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-10 14:31:22.713102 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-10 14:31:22.713112 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-10 14:31:22.713119 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-10 14:31:22.713125 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-10 14:31:22.713132 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-10 14:31:22.713138 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-10 14:31:22.713145 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-10 14:31:22.713170 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-10 14:31:22.713176 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-10 14:31:22.713183 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-10 14:31:22.713196 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-10 14:31:22.713203 | orchestrator |
2026-01-10 14:31:22.713209 | orchestrator | TASK [Manage annotations] ******************************************************
2026-01-10 14:31:22.713216 | orchestrator | Saturday 10 January 2026 14:31:20 +0000 (0:00:13.040) 0:04:22.243 ******
2026-01-10 14:31:22.713223 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:31:22.713229 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:31:22.713236 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:31:22.713242 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:22.713248 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.713254 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.713260 | orchestrator |
2026-01-10 14:31:22.713266 | orchestrator | TASK [Manage taints] ***********************************************************
2026-01-10 14:31:22.713272 | orchestrator | Saturday 10 January 2026 14:31:21 +0000 (0:00:00.604) 0:04:22.848 ******
2026-01-10 14:31:22.713278 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:31:22.713284 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:31:22.713289 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:31:22.713295 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:22.713301 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:31:22.713307 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:31:22.713314 | orchestrator |
2026-01-10 14:31:22.713320 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:31:22.713326 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:22.713331 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-10 14:31:22.713335 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-10 14:31:22.713339 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-10 14:31:22.713343 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 14:31:22.713347 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 14:31:22.713350 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 14:31:22.713354 | orchestrator |
2026-01-10 14:31:22.713358 | orchestrator |
2026-01-10 14:31:22.713362 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:31:22.713365 | orchestrator | Saturday 10 January 2026 14:31:21 +0000 (0:00:00.401) 0:04:23.250 ******
2026-01-10 14:31:22.713369 | orchestrator | ===============================================================================
2026-01-10 14:31:22.713373 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.38s
2026-01-10 14:31:22.713377 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.39s
2026-01-10 14:31:22.713380 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.51s
2026-01-10 14:31:22.713388 | orchestrator | Manage labels ---------------------------------------------------------- 13.04s
2026-01-10 14:31:22.713392 | orchestrator | kubectl : Install required packages ------------------------------------ 12.16s
2026-01-10 14:31:22.713395 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.92s
2026-01-10 14:31:22.713399 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.62s
2026-01-10 14:31:22.713407 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.12s
2026-01-10 14:31:22.713410 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.53s
2026-01-10 14:31:22.713414 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.70s
2026-01-10 14:31:22.713418 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.18s
2026-01-10 14:31:22.713425 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.75s
2026-01-10 14:31:22.713429 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.74s
2026-01-10 14:31:22.713433 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.01s
2026-01-10 14:31:22.713436 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.01s
2026-01-10 14:31:22.713440 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.96s
2026-01-10 14:31:22.713444 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.87s
2026-01-10 14:31:22.713447 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.75s
2026-01-10 14:31:22.713451 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.72s
2026-01-10 14:31:22.713455 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.71s
2026-01-10 14:31:22.713459 | orchestrator | 2026-01-10 14:31:22 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:25.756638 | orchestrator | 2026-01-10 14:31:25 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:31:25.756955 | orchestrator | 2026-01-10 14:31:25 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:31:25.757440 | orchestrator | 2026-01-10 14:31:25 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:31:25.758182 | orchestrator | 2026-01-10 14:31:25 | INFO  | Task 8c1be150-6c74-4826-aa55-fb3c2f565dca is in state STARTED
2026-01-10 14:31:25.758681 | orchestrator | 2026-01-10 14:31:25 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:31:25.759435 | orchestrator | 2026-01-10 14:31:25 | INFO  | Task 2cd8ef68-4cff-4ced-88f1-319c07591c71 is in state STARTED
2026-01-10 14:31:25.759478 | orchestrator | 2026-01-10 14:31:25 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:28.819577 | orchestrator | 2026-01-10 14:31:28 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:31:28.821214 | orchestrator | 2026-01-10 14:31:28 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:31:28.822078 | orchestrator | 2026-01-10 14:31:28 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:31:28.823270 | orchestrator | 2026-01-10 14:31:28 | INFO  | Task 8c1be150-6c74-4826-aa55-fb3c2f565dca is in state STARTED
2026-01-10 14:31:28.824877 | orchestrator | 2026-01-10 14:31:28 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:31:28.826455 | orchestrator | 2026-01-10 14:31:28 | INFO  | Task 2cd8ef68-4cff-4ced-88f1-319c07591c71 is in state STARTED
2026-01-10 14:31:28.826491 | orchestrator | 2026-01-10 14:31:28 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:31.855773 | orchestrator | 2026-01-10 14:31:31 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:31:31.856037 | orchestrator | 2026-01-10 14:31:31 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:31:31.859374 | orchestrator | 2026-01-10 14:31:31 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:31:31.861343 | orchestrator | 2026-01-10 14:31:31 | INFO  | Task 8c1be150-6c74-4826-aa55-fb3c2f565dca is in state STARTED
2026-01-10 14:31:31.861971 | orchestrator | 2026-01-10 14:31:31 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:31:31.862448 | orchestrator | 2026-01-10 14:31:31 | INFO  | Task 2cd8ef68-4cff-4ced-88f1-319c07591c71 is in state SUCCESS
2026-01-10 14:31:31.862797 | orchestrator | 2026-01-10 14:31:31 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:34.924898 | orchestrator | 2026-01-10 14:31:34 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:31:34.925004 | orchestrator | 2026-01-10 14:31:34 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:31:34.925948 | orchestrator | 2026-01-10 14:31:34 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:31:34.926178 | orchestrator | 2026-01-10 14:31:34 | INFO  | Task 8c1be150-6c74-4826-aa55-fb3c2f565dca is in state SUCCESS
2026-01-10 14:31:34.935127 | orchestrator | 2026-01-10 14:31:34 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:31:34.935211 | orchestrator | 2026-01-10 14:31:34 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:37.949866 | orchestrator | 2026-01-10 14:31:37 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:31:37.950178 | orchestrator | 2026-01-10 14:31:37 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:31:37.950744 | orchestrator | 2026-01-10 14:31:37 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:31:37.959161 | orchestrator | 2026-01-10 14:31:37 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:31:37.959283 | orchestrator | 2026-01-10 14:31:37 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:41.007288 | orchestrator | 2026-01-10 14:31:41 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:31:41.007995 | orchestrator | 2026-01-10 14:31:41 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:31:41.010764 | orchestrator | 2026-01-10 14:31:41 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:31:41.013757 | orchestrator | 2026-01-10 14:31:41 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:31:41.013798 | orchestrator | 2026-01-10 14:31:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:44.074891 | orchestrator | 2026-01-10 14:31:44 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:31:44.075768 | orchestrator | 2026-01-10 14:31:44 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:31:44.077452 | orchestrator | 2026-01-10 14:31:44 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:31:44.078377 | orchestrator | 2026-01-10 14:31:44 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:31:44.079445 | orchestrator | 2026-01-10 14:31:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:47.125400 | orchestrator | 2026-01-10 14:31:47 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state STARTED
2026-01-10 14:31:47.125582 | orchestrator | 2026-01-10 14:31:47 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:31:47.126612 | orchestrator | 2026-01-10 14:31:47 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:31:47.127342 | orchestrator | 2026-01-10 14:31:47 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:31:47.127376 | orchestrator | 2026-01-10 14:31:47 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:50.165658 | orchestrator | 2026-01-10 14:31:50 | INFO  | Task e4c1b421-0ae7-47c2-989c-f91be1cf4488 is in state SUCCESS
2026-01-10 14:31:50.166338 | orchestrator |
2026-01-10 14:31:50.166361 | orchestrator |
2026-01-10 14:31:50.166366 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-01-10 14:31:50.166371 | orchestrator |
2026-01-10 14:31:50.166375 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-10 14:31:50.166379 | orchestrator | Saturday 10 January 2026 14:31:26 +0000 (0:00:00.190) 0:00:00.190 ******
2026-01-10 14:31:50.166384 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-10 14:31:50.166387 | orchestrator |
2026-01-10 14:31:50.166391 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-10 14:31:50.166395 | orchestrator | Saturday 10 January 2026 14:31:27 +0000 (0:00:00.930) 0:00:01.120 ******
2026-01-10 14:31:50.166399 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:50.166403 | orchestrator |
2026-01-10 14:31:50.166407 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-01-10 14:31:50.166411 | orchestrator | Saturday 10 January 2026 14:31:28 +0000 (0:00:01.058) 0:00:02.179 ******
2026-01-10 14:31:50.166414 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:50.166418 | orchestrator |
2026-01-10 14:31:50.166422 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:31:50.166426 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:50.166431 | orchestrator |
2026-01-10 14:31:50.166435 | orchestrator |
2026-01-10 14:31:50.166438 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:31:50.166442 | orchestrator | Saturday 10 January 2026 14:31:29 +0000 (0:00:00.521) 0:00:02.700 ******
2026-01-10 14:31:50.166446 | orchestrator | ===============================================================================
2026-01-10 14:31:50.166450 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.06s
2026-01-10 14:31:50.166453 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.93s
2026-01-10 14:31:50.166457 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.52s
2026-01-10 14:31:50.166461 | orchestrator |
2026-01-10 14:31:50.166465 | orchestrator |
2026-01-10 14:31:50.166468 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-10 14:31:50.166472 | orchestrator |
2026-01-10 14:31:50.166476 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-10 14:31:50.166480 | orchestrator | Saturday 10 January 2026 14:31:26 +0000 (0:00:00.125) 0:00:00.125 ******
2026-01-10 14:31:50.166483 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:50.166488 | orchestrator |
2026-01-10 14:31:50.166492 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-10 14:31:50.166503 | orchestrator | Saturday 10 January 2026 14:31:26 +0000 (0:00:00.590) 0:00:00.715 ******
2026-01-10 14:31:50.166507 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:50.166511 | orchestrator |
2026-01-10 14:31:50.166515 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-10 14:31:50.166519 | orchestrator | Saturday 10 January 2026 14:31:27 +0000 (0:00:00.777) 0:00:01.493 ******
2026-01-10 14:31:50.166522 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-10 14:31:50.166526 | orchestrator |
2026-01-10 14:31:50.166530 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-10 14:31:50.166534 | orchestrator | Saturday 10 January 2026 14:31:28 +0000 (0:00:00.571) 0:00:02.064 ******
2026-01-10 14:31:50.166538 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:50.166542 | orchestrator |
2026-01-10 14:31:50.166555 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-10 14:31:50.166559 | orchestrator | Saturday 10 January 2026 14:31:29 +0000 (0:00:01.375) 0:00:03.440 ******
2026-01-10 14:31:50.166563 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:50.166567 | orchestrator |
2026-01-10 14:31:50.166570 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-10 14:31:50.166574 | orchestrator | Saturday 10 January 2026 14:31:29 +0000 (0:00:00.497) 0:00:03.937 ******
2026-01-10 14:31:50.166578 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-10 14:31:50.166582 | orchestrator |
2026-01-10 14:31:50.166585 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-10 14:31:50.166589 | orchestrator | Saturday 10 January 2026 14:31:31 +0000 (0:00:01.521) 0:00:05.458 ******
2026-01-10 14:31:50.166593 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-10 14:31:50.166597 | orchestrator |
2026-01-10 14:31:50.166600 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-10 14:31:50.166604 | orchestrator | Saturday 10 January 2026 14:31:32 +0000 (0:00:00.764) 0:00:06.223 ******
2026-01-10 14:31:50.166608 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:50.166612 | orchestrator |
2026-01-10 14:31:50.166615 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-10 14:31:50.166622 | orchestrator | Saturday 10 January 2026 14:31:32 +0000 (0:00:00.479) 0:00:06.702 ******
2026-01-10 14:31:50.166630 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:50.166639 | orchestrator |
2026-01-10 14:31:50.166645 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:31:50.166651 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:50.166657 | orchestrator |
2026-01-10 14:31:50.166663 | orchestrator |
2026-01-10 14:31:50.166669 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:31:50.166675 | orchestrator | Saturday 10 January 2026 14:31:32 +0000 (0:00:00.293) 0:00:06.995 ******
2026-01-10 14:31:50.166680 | orchestrator | ===============================================================================
2026-01-10 14:31:50.166686 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s
2026-01-10 14:31:50.166692 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.38s
2026-01-10 14:31:50.166710 | orchestrator | Create .kube directory -------------------------------------------------- 0.78s
2026-01-10 14:31:50.166723 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.76s
2026-01-10 14:31:50.166730 | orchestrator | Get home directory of operator user ------------------------------------- 0.59s
2026-01-10 14:31:50.166736 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.57s
2026-01-10 14:31:50.166742 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.50s
2026-01-10 14:31:50.166748 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.48s
2026-01-10 14:31:50.166755 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.30s
2026-01-10 14:31:50.166761 | orchestrator |
2026-01-10 14:31:50.166776 | orchestrator |
2026-01-10 14:31:50.166788 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-01-10 14:31:50.166795 | orchestrator |
2026-01-10 14:31:50.166800 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-01-10 14:31:50.166803 | orchestrator | Saturday 10 January 2026 14:29:32 +0000 (0:00:00.148) 0:00:00.148 ******
2026-01-10 14:31:50.166807 | orchestrator | ok: [localhost] => {
2026-01-10 14:31:50.166812 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-01-10 14:31:50.166816 | orchestrator | }
2026-01-10 14:31:50.166820 | orchestrator |
2026-01-10 14:31:50.166824 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-01-10 14:31:50.166832 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:00.101) 0:00:00.250 ******
2026-01-10 14:31:50.166836 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-01-10 14:31:50.166841 | orchestrator | ...ignoring
2026-01-10 14:31:50.166845 | orchestrator |
2026-01-10 14:31:50.166849 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-01-10 14:31:50.166853 | orchestrator | Saturday 10 January 2026 14:29:36 +0000 (0:00:03.125) 0:00:03.375 ******
2026-01-10 14:31:50.166857 | orchestrator | skipping: [localhost]
2026-01-10 14:31:50.166861 | orchestrator |
2026-01-10 14:31:50.166864 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-01-10 14:31:50.166868 | orchestrator | Saturday 10 January 2026 14:29:36 +0000 (0:00:00.086) 0:00:03.462 ******
2026-01-10 14:31:50.166872 | orchestrator | ok: [localhost]
2026-01-10 14:31:50.166876 | orchestrator |
2026-01-10 14:31:50.166879 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:31:50.166883 | orchestrator |
2026-01-10 14:31:50.166887 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:31:50.166894 | orchestrator | Saturday 10 January 2026 14:29:36 +0000 (0:00:00.245) 0:00:03.707 ******
2026-01-10 14:31:50.166898 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:50.166902 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:50.166906 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:50.166909 | orchestrator |
2026-01-10 14:31:50.166913 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:31:50.166917 | orchestrator | Saturday 10 January 2026 14:29:36 +0000 (0:00:00.426) 0:00:04.134 ******
2026-01-10 14:31:50.166921 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-01-10 14:31:50.166924 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-01-10 14:31:50.166928 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-01-10 14:31:50.166932 | orchestrator |
2026-01-10 14:31:50.166952 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-01-10 14:31:50.166957 | orchestrator |
2026-01-10 14:31:50.166961 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-10 14:31:50.166965 | orchestrator | Saturday 10 January 2026 14:29:37 +0000 (0:00:00.606) 0:00:04.741 ******
2026-01-10 14:31:50.166970 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:31:50.166974 | orchestrator |
2026-01-10 14:31:50.166978 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-10 14:31:50.166983 | orchestrator | Saturday 10 January 2026 14:29:37 +0000 (0:00:00.490) 0:00:05.231 ******
2026-01-10 14:31:50.166987 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:50.166991 | orchestrator |
2026-01-10 14:31:50.166996 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-01-10 14:31:50.167000 | orchestrator | Saturday 10 January 2026 14:29:39 +0000 (0:00:01.030) 0:00:06.262 ******
2026-01-10 14:31:50.167004 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:50.167009 | orchestrator |
2026-01-10 14:31:50.167013 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-01-10 14:31:50.167018 | orchestrator | Saturday 10 January 2026 14:29:39 +0000 (0:00:00.384) 0:00:06.646 ******
2026-01-10 14:31:50.167022 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:50.167026 | orchestrator |
2026-01-10 14:31:50.167030 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-01-10 14:31:50.167034 | orchestrator | Saturday 10 January 2026 14:29:39 +0000 (0:00:00.350) 0:00:06.997 ******
2026-01-10 14:31:50.167039 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:50.167043 | orchestrator |
2026-01-10 14:31:50.167047 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-01-10 14:31:50.167052 | orchestrator | Saturday 10 January 2026 14:29:40 +0000 (0:00:00.371) 0:00:07.369 ******
2026-01-10 14:31:50.167059 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:50.167063 | orchestrator |
2026-01-10 14:31:50.167067 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-10 14:31:50.167072 | orchestrator | Saturday 10 January 2026 14:29:41 +0000 (0:00:01.161) 0:00:08.530 ******
2026-01-10 14:31:50.167076 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-01-10 14:31:50.167080 | orchestrator |
2026-01-10 14:31:50.167084 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-10 14:31:50.167092 | orchestrator | Saturday 10 January 2026 14:29:42 +0000 (0:00:01.082) 0:00:09.613 ******
2026-01-10 14:31:50.167097 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:50.167101 | orchestrator |
2026-01-10 14:31:50.167105 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-01-10 14:31:50.167110 | orchestrator | Saturday 10 January 2026 14:29:43 +0000 (0:00:00.940) 0:00:10.554 ******
2026-01-10 14:31:50.167114 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:50.167118 | orchestrator |
2026-01-10 14:31:50.167123 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-01-10 14:31:50.167127 | orchestrator | Saturday 10 January 2026 14:29:43 +0000 (0:00:00.572) 0:00:11.127 ******
2026-01-10 14:31:50.167131 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:31:50.167136 | orchestrator |
2026-01-10 14:31:50.167140 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-01-10 14:31:50.167144 | orchestrator | Saturday 10 January 2026 14:29:44 +0000 (0:00:00.589) 0:00:11.716 ******
2026-01-10 14:31:50.167151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-10 14:31:50.167158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-10 14:31:50.167163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http',
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:31:50.167170 | orchestrator | 2026-01-10 14:31:50.167175 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-10 14:31:50.167179 | orchestrator | Saturday 10 January 2026 14:29:46 +0000 (0:00:02.171) 0:00:13.887 ****** 2026-01-10 14:31:50.167187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:31:50.167230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:31:50.167239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:31:50.167247 | orchestrator | 2026-01-10 14:31:50.167251 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-10 14:31:50.167255 | orchestrator | Saturday 10 January 2026 14:29:49 +0000 (0:00:03.047) 0:00:16.935 ****** 2026-01-10 14:31:50.167260 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-10 14:31:50.167264 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-10 14:31:50.167268 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-10 14:31:50.167271 | orchestrator | 2026-01-10 14:31:50.167275 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-01-10 14:31:50.167279 | orchestrator | Saturday 10 January 2026 14:29:51 +0000 (0:00:02.223) 0:00:19.158 ****** 2026-01-10 14:31:50.167283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-10 14:31:50.167286 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-10 14:31:50.167290 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-10 14:31:50.167294 | orchestrator | 2026-01-10 14:31:50.167297 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-10 14:31:50.167304 | orchestrator | Saturday 10 January 2026 14:29:54 +0000 (0:00:02.331) 0:00:21.489 ****** 2026-01-10 14:31:50.167308 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-10 14:31:50.167312 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-10 14:31:50.167315 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-10 14:31:50.167319 | orchestrator | 2026-01-10 14:31:50.167323 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-10 14:31:50.167326 | orchestrator | Saturday 10 January 2026 14:29:55 +0000 (0:00:01.522) 0:00:23.012 ****** 2026-01-10 14:31:50.167330 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-10 14:31:50.167334 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-10 14:31:50.167337 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-10 14:31:50.167341 | orchestrator | 2026-01-10 14:31:50.167345 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-01-10 14:31:50.167349 | orchestrator | Saturday 10 January 2026 14:29:58 +0000 (0:00:02.590) 0:00:25.603 ****** 2026-01-10 14:31:50.167352 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-10 14:31:50.167356 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-10 14:31:50.167360 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-10 14:31:50.167363 | orchestrator | 2026-01-10 14:31:50.167367 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-10 14:31:50.167371 | orchestrator | Saturday 10 January 2026 14:30:00 +0000 (0:00:01.994) 0:00:27.598 ****** 2026-01-10 14:31:50.167374 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-10 14:31:50.167378 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-10 14:31:50.167382 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-10 14:31:50.167386 | orchestrator | 2026-01-10 14:31:50.167392 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-10 14:31:50.167395 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 
(0:00:01.445) 0:00:29.043 ****** 2026-01-10 14:31:50.167401 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:50.167405 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:50.167409 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:50.167412 | orchestrator | 2026-01-10 14:31:50.167416 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-10 14:31:50.167420 | orchestrator | Saturday 10 January 2026 14:30:02 +0000 (0:00:00.650) 0:00:29.694 ****** 2026-01-10 14:31:50.167424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:31:50.167431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:31:50.167435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:31:50.167439 | orchestrator | 2026-01-10 14:31:50.167443 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 
2026-01-10 14:31:50.167446 | orchestrator | Saturday 10 January 2026 14:30:04 +0000 (0:00:01.623) 0:00:31.317 ****** 2026-01-10 14:31:50.167453 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:50.167456 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:50.167460 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:50.167464 | orchestrator | 2026-01-10 14:31:50.167467 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-10 14:31:50.167471 | orchestrator | Saturday 10 January 2026 14:30:04 +0000 (0:00:00.884) 0:00:32.202 ****** 2026-01-10 14:31:50.167475 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:50.167479 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:50.167482 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:50.167486 | orchestrator | 2026-01-10 14:31:50.167490 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-10 14:31:50.167495 | orchestrator | Saturday 10 January 2026 14:30:12 +0000 (0:00:07.268) 0:00:39.470 ****** 2026-01-10 14:31:50.167499 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:50.167503 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:50.167506 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:50.167510 | orchestrator | 2026-01-10 14:31:50.167514 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-10 14:31:50.167518 | orchestrator | 2026-01-10 14:31:50.167521 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-10 14:31:50.167525 | orchestrator | Saturday 10 January 2026 14:30:12 +0000 (0:00:00.431) 0:00:39.901 ****** 2026-01-10 14:31:50.167529 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:50.167532 | orchestrator | 2026-01-10 14:31:50.167536 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] 
********************** 2026-01-10 14:31:50.167540 | orchestrator | Saturday 10 January 2026 14:30:13 +0000 (0:00:00.604) 0:00:40.506 ****** 2026-01-10 14:31:50.167544 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:31:50.167547 | orchestrator | 2026-01-10 14:31:50.167551 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-10 14:31:50.167555 | orchestrator | Saturday 10 January 2026 14:30:13 +0000 (0:00:00.217) 0:00:40.723 ****** 2026-01-10 14:31:50.167559 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:50.167562 | orchestrator | 2026-01-10 14:31:50.167566 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-10 14:31:50.167570 | orchestrator | Saturday 10 January 2026 14:30:15 +0000 (0:00:02.016) 0:00:42.740 ****** 2026-01-10 14:31:50.167573 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:31:50.167577 | orchestrator | 2026-01-10 14:31:50.167581 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-10 14:31:50.167585 | orchestrator | 2026-01-10 14:31:50.167588 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-10 14:31:50.167592 | orchestrator | Saturday 10 January 2026 14:31:12 +0000 (0:00:56.739) 0:01:39.479 ****** 2026-01-10 14:31:50.167596 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:50.167599 | orchestrator | 2026-01-10 14:31:50.167603 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-10 14:31:50.167607 | orchestrator | Saturday 10 January 2026 14:31:13 +0000 (0:00:00.782) 0:01:40.262 ****** 2026-01-10 14:31:50.167610 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:31:50.167614 | orchestrator | 2026-01-10 14:31:50.167618 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-10 14:31:50.167622 | 
orchestrator | Saturday 10 January 2026 14:31:13 +0000 (0:00:00.529) 0:01:40.791 ****** 2026-01-10 14:31:50.167625 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:50.167629 | orchestrator | 2026-01-10 14:31:50.167633 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-10 14:31:50.167636 | orchestrator | Saturday 10 January 2026 14:31:20 +0000 (0:00:06.874) 0:01:47.666 ****** 2026-01-10 14:31:50.167640 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:31:50.167644 | orchestrator | 2026-01-10 14:31:50.167648 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-10 14:31:50.167654 | orchestrator | 2026-01-10 14:31:50.167657 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-10 14:31:50.167661 | orchestrator | Saturday 10 January 2026 14:31:30 +0000 (0:00:09.874) 0:01:57.540 ****** 2026-01-10 14:31:50.167665 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:50.167668 | orchestrator | 2026-01-10 14:31:50.167676 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-10 14:31:50.167682 | orchestrator | Saturday 10 January 2026 14:31:30 +0000 (0:00:00.612) 0:01:58.153 ****** 2026-01-10 14:31:50.167689 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:31:50.167708 | orchestrator | 2026-01-10 14:31:50.167714 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-10 14:31:50.167720 | orchestrator | Saturday 10 January 2026 14:31:31 +0000 (0:00:00.533) 0:01:58.687 ****** 2026-01-10 14:31:50.167725 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:50.167766 | orchestrator | 2026-01-10 14:31:50.167773 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-10 14:31:50.167779 | orchestrator | Saturday 10 January 2026 14:31:33 +0000 
(0:00:01.953) 0:02:00.641 ****** 2026-01-10 14:31:50.167785 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:31:50.167790 | orchestrator | 2026-01-10 14:31:50.167796 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-10 14:31:50.167802 | orchestrator | 2026-01-10 14:31:50.167809 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-10 14:31:50.167816 | orchestrator | Saturday 10 January 2026 14:31:46 +0000 (0:00:13.353) 0:02:13.994 ****** 2026-01-10 14:31:50.167820 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:31:50.167824 | orchestrator | 2026-01-10 14:31:50.167828 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-10 14:31:50.167832 | orchestrator | Saturday 10 January 2026 14:31:47 +0000 (0:00:00.564) 0:02:14.558 ****** 2026-01-10 14:31:50.167835 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-10 14:31:50.167839 | orchestrator | enable_outward_rabbitmq_True 2026-01-10 14:31:50.167843 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-10 14:31:50.167846 | orchestrator | outward_rabbitmq_restart 2026-01-10 14:31:50.167850 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:31:50.167854 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:31:50.167858 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:31:50.167861 | orchestrator | 2026-01-10 14:31:50.167865 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-10 14:31:50.167869 | orchestrator | skipping: no hosts matched 2026-01-10 14:31:50.167873 | orchestrator | 2026-01-10 14:31:50.167876 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-10 14:31:50.167880 | orchestrator | skipping: no hosts matched 2026-01-10 
14:31:50.167884 | orchestrator | 2026-01-10 14:31:50.167887 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-10 14:31:50.167891 | orchestrator | skipping: no hosts matched 2026-01-10 14:31:50.167895 | orchestrator | 2026-01-10 14:31:50.167898 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:31:50.167906 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-10 14:31:50.167913 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-10 14:31:50.167941 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:31:50.167948 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:31:50.167954 | orchestrator | 2026-01-10 14:31:50.167965 | orchestrator | 2026-01-10 14:31:50.167973 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:31:50.167977 | orchestrator | Saturday 10 January 2026 14:31:49 +0000 (0:00:02.316) 0:02:16.874 ****** 2026-01-10 14:31:50.167981 | orchestrator | =============================================================================== 2026-01-10 14:31:50.167985 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.97s 2026-01-10 14:31:50.167988 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.84s 2026-01-10 14:31:50.167992 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.27s 2026-01-10 14:31:50.167996 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.13s 2026-01-10 14:31:50.167999 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 
3.05s 2026-01-10 14:31:50.168003 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.59s 2026-01-10 14:31:50.168007 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.33s 2026-01-10 14:31:50.168011 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.32s 2026-01-10 14:31:50.168014 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.22s 2026-01-10 14:31:50.168018 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.17s 2026-01-10 14:31:50.168022 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.00s 2026-01-10 14:31:50.168025 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.99s 2026-01-10 14:31:50.168029 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.62s 2026-01-10 14:31:50.168033 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.53s 2026-01-10 14:31:50.168036 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.45s 2026-01-10 14:31:50.168040 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.28s 2026-01-10 14:31:50.168044 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.16s 2026-01-10 14:31:50.168052 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.08s 2026-01-10 14:31:50.168056 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.03s 2026-01-10 14:31:50.168060 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.94s 2026-01-10 14:31:50.168796 | orchestrator | 2026-01-10 14:31:50 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state 
STARTED 2026-01-10 14:31:50.171100 | orchestrator | 2026-01-10 14:31:50 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:50.173218 | orchestrator | 2026-01-10 14:31:50 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:50.173326 | orchestrator | 2026-01-10 14:31:50 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:53.221529 | orchestrator | 2026-01-10 14:31:53 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:31:53.222887 | orchestrator | 2026-01-10 14:31:53 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:53.224774 | orchestrator | 2026-01-10 14:31:53 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:53.225024 | orchestrator | 2026-01-10 14:31:53 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:56.276951 | orchestrator | 2026-01-10 14:31:56 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:31:56.277449 | orchestrator | 2026-01-10 14:31:56 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:56.278298 | orchestrator | 2026-01-10 14:31:56 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:56.278367 | orchestrator | 2026-01-10 14:31:56 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:59.328954 | orchestrator | 2026-01-10 14:31:59 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:31:59.329520 | orchestrator | 2026-01-10 14:31:59 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:31:59.331505 | orchestrator | 2026-01-10 14:31:59 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED 2026-01-10 14:31:59.331542 | orchestrator | 2026-01-10 14:31:59 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:02.400136 | orchestrator | 
2026-01-10 14:32:02 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:32:02.403273 | orchestrator | 2026-01-10 14:32:02 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:32:02.404983 | orchestrator | 2026-01-10 14:32:02 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state STARTED
2026-01-10 14:32:02.405017 | orchestrator | 2026-01-10 14:32:02 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeated every ~3 seconds from 14:32:05 through 14:32:45; all three tasks remained in state STARTED]
2026-01-10 14:32:48.092575 | orchestrator | 2026-01-10 14:32:48 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:32:48.094633 | orchestrator | 2026-01-10 14:32:48 | INFO  | Task 
96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:32:48.097709 | orchestrator | 2026-01-10 14:32:48 | INFO  | Task 72dbdbac-8632-49cf-9f09-81a4108592f3 is in state SUCCESS
2026-01-10 14:32:48.099037 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:32:48.099049 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:32:48.099056 | orchestrator | Saturday 10 January 2026 14:30:19 +0000 (0:00:00.183) 0:00:00.183 ******
2026-01-10 14:32:48.099061 | orchestrator | ok: [testbed-node-3], [testbed-node-4], [testbed-node-5], [testbed-node-0], [testbed-node-1], [testbed-node-2]
2026-01-10 14:32:48.099098 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:32:48.099104 | orchestrator | Saturday 10 January 2026 14:30:20 +0000 (0:00:00.667) 0:00:00.851 ******
2026-01-10 14:32:48.099108 | orchestrator | ok: all six testbed nodes => (item=enable_ovn_True)
2026-01-10 14:32:48.099139 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-01-10 14:32:48.099159 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-01-10 14:32:48.099165 | orchestrator | Saturday 10 January 2026 14:30:21 +0000 (0:00:01.043) 0:00:01.894 ******
2026-01-10 14:32:48.099171 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:32:48.099180 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-01-10 14:32:48.099186 | orchestrator | Saturday 10 January 2026 14:30:22 +0000 (0:00:01.223) 0:00:03.117 ******
2026-01-10 14:32:48.099193 | orchestrator | changed: all six testbed nodes => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:32:48.099243 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-01-10 14:32:48.099247 | orchestrator | Saturday 10 January 2026 14:30:23 +0000 (0:00:01.112) 0:00:04.230 ******
2026-01-10 14:32:48.099250 | orchestrator | changed: all six testbed nodes => (item: same ovn-controller definition as above)
2026-01-10 14:32:48.099281 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-01-10 14:32:48.099284 | orchestrator | Saturday 10 January 2026 14:30:25 +0000 (0:00:01.652) 0:00:05.882 ******
2026-01-10 14:32:48.099287 | orchestrator | changed: all six testbed nodes => (item: same ovn-controller definition as above)
2026-01-10 14:32:48.099365 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-01-10 14:32:48.099368 | orchestrator | Saturday 10 January 2026 14:30:27 +0000 (0:00:01.919) 0:00:07.802 ******
2026-01-10 14:32:48.099371 | orchestrator | changed: all six testbed nodes => (item: same ovn-controller definition as above)
2026-01-10 14:32:48.099418 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-01-10 14:32:48.099421 | orchestrator | Saturday 10 January 2026 14:30:29 +0000 (0:00:02.236) 0:00:10.038 ******
2026-01-10 14:32:48.099424 | orchestrator | changed: all six testbed nodes => (item: same ovn-controller definition as above)
2026-01-10 14:32:48.099485 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-01-10 14:32:48.099488 | orchestrator | Saturday 10 January 2026 14:30:30 +0000 (0:00:01.596) 0:00:11.634 ******
2026-01-10 14:32:48.099491 | orchestrator | changed: all six testbed nodes
2026-01-10 14:32:48.099513 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-01-10 14:32:48.099516 | orchestrator | Saturday 10 January 2026 14:30:33 +0000 (0:00:02.861) 0:00:14.496 ******
2026-01-10 14:32:48.099520 | orchestrator | changed: ovn-encap-ip per node (testbed-node-0: 192.168.16.10, node-1: 192.168.16.11, node-2: 192.168.16.12, node-3: 192.168.16.13, node-4: 192.168.16.14, node-5: 192.168.16.15)
2026-01-10 14:32:48.099539 | orchestrator | changed: all six nodes => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-10 14:32:48.099563 | orchestrator | changed: all six nodes => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-10 14:32:48.099585 | orchestrator | changed: all six nodes => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-10 14:32:48.099607 | orchestrator | changed: all six nodes => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-10 14:32:48.099628 | orchestrator | changed: all six nodes => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-10 14:32:48.099650 | orchestrator | ovn-bridge-mappings 'physnet1:br-ex': ok (state 'absent') on testbed-node-3, testbed-node-4, testbed-node-5; changed (state 'present') on testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:32:48.099667 | orchestrator | ovn-chassis-mac-mappings: changed (state 'present') on testbed-node-3 (physnet1:52:54:00:89:18:56), testbed-node-4 (physnet1:52:54:00:2f:fa:44), testbed-node-5 (physnet1:52:54:00:71:3a:c3); ok (state 'absent') on testbed-node-0 (physnet1:52:54:00:52:c1:40), testbed-node-1 (physnet1:52:54:00:33:12:50), testbed-node-2 (physnet1:52:54:00:29:4a:9b)
2026-01-10 14:32:48.099691 | orchestrator | ovn-cms-options: ok (state 'absent', value '') on testbed-node-3, testbed-node-4, testbed-node-5; changed (state 'present', value 'enable-chassis-as-gw,availability-zones=nova') on testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:32:48.099726 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** (six consecutive no-op flush-handler tasks, Saturday 10 January 2026 14:30:55-14:30:56 +0000, cumulative 0:00:36.407 through 0:00:36.894)
2026-01-10 14:32:48.099817 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-01-10 14:32:48.099821 | orchestrator | Saturday 10 January 2026 14:30:56 +0000 (0:00:00.069) 0:00:36.964 ******
2026-01-10 14:32:48.099824 | orchestrator | ok: all six testbed nodes
2026-01-10 14:32:48.099849 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-01-10 14:32:48.099852 | orchestrator | Saturday 10 January 2026 14:30:58 +0000 (0:00:01.795) 0:00:38.759 ******
2026-01-10 14:32:48.099856 | orchestrator | changed: all six testbed nodes
2026-01-10 14:32:48.099881 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-01-10 14:32:48.099888 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-10 14:32:48.099891 | orchestrator | Saturday 10 January 2026 14:31:26 +0000 (0:00:28.315) 0:01:07.075 ******
2026-01-10 14:32:48.099895 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:32:48.099906 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-10 14:32:48.099911 | orchestrator | Saturday 10 January 2026 14:31:27 +0000 (0:00:01.226) 0:01:08.302 ******
2026-01-10 14:32:48.099916 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:32:48.099927 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-01-10 14:32:48.099933 | orchestrator | Saturday 10 January 2026 14:31:28 +0000 (0:00:00.517) 0:01:08.819 ******
2026-01-10 14:32:48.099938 | orchestrator | ok: [testbed-node-0], [testbed-node-1], [testbed-node-2]
2026-01-10 14:32:48.099960 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-01-10 14:32:48.099966 | orchestrator | Saturday 10 January 2026 14:31:29 +0000 (0:00:00.908) 0:01:09.728 ******
2026-01-10 14:32:48.099969 | orchestrator | ok: [testbed-node-0], [testbed-node-1], [testbed-node-2]
2026-01-10 14:32:48.099988
| orchestrator | 2026-01-10 14:32:48.099994 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-10 14:32:48.100000 | orchestrator | Saturday 10 January 2026 14:31:30 +0000 (0:00:00.978) 0:01:10.706 ****** 2026-01-10 14:32:48.100005 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:48.100009 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:48.100012 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:48.100015 | orchestrator | 2026-01-10 14:32:48.100019 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-10 14:32:48.100022 | orchestrator | Saturday 10 January 2026 14:31:30 +0000 (0:00:00.369) 0:01:11.076 ****** 2026-01-10 14:32:48.100025 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:48.100028 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:48.100031 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:48.100034 | orchestrator | 2026-01-10 14:32:48.100037 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-10 14:32:48.100044 | orchestrator | Saturday 10 January 2026 14:31:30 +0000 (0:00:00.532) 0:01:11.609 ****** 2026-01-10 14:32:48.100047 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:48.100050 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:48.100053 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:48.100056 | orchestrator | 2026-01-10 14:32:48.100063 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-10 14:32:48.100067 | orchestrator | Saturday 10 January 2026 14:31:32 +0000 (0:00:01.230) 0:01:12.839 ****** 2026-01-10 14:32:48.100070 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100073 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100076 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100079 | orchestrator | 2026-01-10 14:32:48.100082 | 
orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-10 14:32:48.100085 | orchestrator | Saturday 10 January 2026 14:31:32 +0000 (0:00:00.515) 0:01:13.354 ****** 2026-01-10 14:32:48.100088 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100092 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100095 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100098 | orchestrator | 2026-01-10 14:32:48.100101 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-10 14:32:48.100104 | orchestrator | Saturday 10 January 2026 14:31:33 +0000 (0:00:00.306) 0:01:13.661 ****** 2026-01-10 14:32:48.100107 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100110 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100113 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100116 | orchestrator | 2026-01-10 14:32:48.100119 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-10 14:32:48.100123 | orchestrator | Saturday 10 January 2026 14:31:33 +0000 (0:00:00.306) 0:01:13.968 ****** 2026-01-10 14:32:48.100126 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100129 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100132 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100135 | orchestrator | 2026-01-10 14:32:48.100138 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-10 14:32:48.100142 | orchestrator | Saturday 10 January 2026 14:31:33 +0000 (0:00:00.462) 0:01:14.430 ****** 2026-01-10 14:32:48.100145 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100148 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100175 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100179 | orchestrator | 2026-01-10 14:32:48.100182 | 
orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-10 14:32:48.100185 | orchestrator | Saturday 10 January 2026 14:31:34 +0000 (0:00:00.526) 0:01:14.956 ****** 2026-01-10 14:32:48.100188 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100191 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100194 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100197 | orchestrator | 2026-01-10 14:32:48.100200 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-10 14:32:48.100203 | orchestrator | Saturday 10 January 2026 14:31:34 +0000 (0:00:00.381) 0:01:15.337 ****** 2026-01-10 14:32:48.100207 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100210 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100213 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100216 | orchestrator | 2026-01-10 14:32:48.100219 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-10 14:32:48.100222 | orchestrator | Saturday 10 January 2026 14:31:35 +0000 (0:00:00.340) 0:01:15.678 ****** 2026-01-10 14:32:48.100225 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100228 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100231 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100234 | orchestrator | 2026-01-10 14:32:48.100237 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-10 14:32:48.100243 | orchestrator | Saturday 10 January 2026 14:31:35 +0000 (0:00:00.441) 0:01:16.119 ****** 2026-01-10 14:32:48.100246 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100249 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100252 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100255 | orchestrator | 2026-01-10 14:32:48.100258 | 
orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-10 14:32:48.100262 | orchestrator | Saturday 10 January 2026 14:31:35 +0000 (0:00:00.290) 0:01:16.410 ****** 2026-01-10 14:32:48.100265 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100270 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100275 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100278 | orchestrator | 2026-01-10 14:32:48.100281 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-10 14:32:48.100284 | orchestrator | Saturday 10 January 2026 14:31:36 +0000 (0:00:00.284) 0:01:16.695 ****** 2026-01-10 14:32:48.100287 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100290 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100293 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100296 | orchestrator | 2026-01-10 14:32:48.100299 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-10 14:32:48.100303 | orchestrator | Saturday 10 January 2026 14:31:36 +0000 (0:00:00.295) 0:01:16.990 ****** 2026-01-10 14:32:48.100306 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100309 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100314 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100318 | orchestrator | 2026-01-10 14:32:48.100321 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-10 14:32:48.100324 | orchestrator | Saturday 10 January 2026 14:31:36 +0000 (0:00:00.252) 0:01:17.242 ****** 2026-01-10 14:32:48.100327 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:32:48.100330 | orchestrator | 2026-01-10 14:32:48.100333 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB 
(new cluster)] ******************* 2026-01-10 14:32:48.100336 | orchestrator | Saturday 10 January 2026 14:31:37 +0000 (0:00:00.719) 0:01:17.961 ****** 2026-01-10 14:32:48.100340 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:48.100343 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:48.100346 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:48.100349 | orchestrator | 2026-01-10 14:32:48.100352 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-10 14:32:48.100355 | orchestrator | Saturday 10 January 2026 14:31:37 +0000 (0:00:00.435) 0:01:18.397 ****** 2026-01-10 14:32:48.100358 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:48.100361 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:48.100364 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:48.100367 | orchestrator | 2026-01-10 14:32:48.100372 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-10 14:32:48.100376 | orchestrator | Saturday 10 January 2026 14:31:38 +0000 (0:00:00.451) 0:01:18.848 ****** 2026-01-10 14:32:48.100379 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100382 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100385 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100388 | orchestrator | 2026-01-10 14:32:48.100391 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-10 14:32:48.100394 | orchestrator | Saturday 10 January 2026 14:31:38 +0000 (0:00:00.444) 0:01:19.292 ****** 2026-01-10 14:32:48.100397 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100400 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100405 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100411 | orchestrator | 2026-01-10 14:32:48.100416 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] 
*** 2026-01-10 14:32:48.100421 | orchestrator | Saturday 10 January 2026 14:31:38 +0000 (0:00:00.321) 0:01:19.614 ****** 2026-01-10 14:32:48.100431 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100435 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100439 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100442 | orchestrator | 2026-01-10 14:32:48.100445 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-10 14:32:48.100448 | orchestrator | Saturday 10 January 2026 14:31:39 +0000 (0:00:00.299) 0:01:19.913 ****** 2026-01-10 14:32:48.100452 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100455 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100458 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100461 | orchestrator | 2026-01-10 14:32:48.100464 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-10 14:32:48.100467 | orchestrator | Saturday 10 January 2026 14:31:39 +0000 (0:00:00.324) 0:01:20.237 ****** 2026-01-10 14:32:48.100470 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100473 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100477 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100480 | orchestrator | 2026-01-10 14:32:48.100483 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-10 14:32:48.100486 | orchestrator | Saturday 10 January 2026 14:31:40 +0000 (0:00:00.571) 0:01:20.809 ****** 2026-01-10 14:32:48.100489 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.100492 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.100495 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.100499 | orchestrator | 2026-01-10 14:32:48.100502 | orchestrator | TASK [ovn-db : Ensuring config directories exist] 
****************************** 2026-01-10 14:32:48.100507 | orchestrator | Saturday 10 January 2026 14:31:40 +0000 (0:00:00.332) 0:01:21.142 ****** 2026-01-10 14:32:48.100513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-10 14:32:48.100869 | orchestrator | 2026-01-10 14:32:48.100875 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-10 14:32:48.100881 | orchestrator | Saturday 10 January 2026 14:31:41 +0000 (0:00:01.389) 0:01:22.531 ****** 2026-01-10 14:32:48.100887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100915 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100954 | orchestrator | 2026-01-10 14:32:48.100960 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-10 14:32:48.100966 | orchestrator | Saturday 10 January 2026 14:31:46 +0000 (0:00:04.374) 0:01:26.905 ****** 2026-01-10 14:32:48.100971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.100994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101030 | orchestrator | 2026-01-10 14:32:48.101036 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:32:48.101042 | orchestrator | Saturday 10 January 2026 14:31:48 +0000 (0:00:02.245) 0:01:29.151 ****** 2026-01-10 14:32:48.101048 | orchestrator | 2026-01-10 14:32:48.101054 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:32:48.101071 | orchestrator | Saturday 10 January 2026 14:31:48 +0000 (0:00:00.065) 0:01:29.216 ****** 2026-01-10 14:32:48.101077 | orchestrator | 2026-01-10 14:32:48.101082 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:32:48.101088 | orchestrator | Saturday 10 January 2026 14:31:48 +0000 (0:00:00.069) 0:01:29.286 ****** 2026-01-10 14:32:48.101093 | orchestrator | 2026-01-10 14:32:48.101098 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-10 14:32:48.101103 | orchestrator | Saturday 10 January 2026 14:31:48 +0000 (0:00:00.069) 0:01:29.355 ****** 2026-01-10 14:32:48.101108 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:48.101113 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:48.101118 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:48.101123 | orchestrator | 2026-01-10 14:32:48.101128 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-10 14:32:48.101133 | orchestrator | Saturday 10 January 2026 14:31:56 +0000 (0:00:07.346) 0:01:36.702 ****** 2026-01-10 14:32:48.101139 | 
orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:48.101144 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:48.101149 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:48.101155 | orchestrator | 2026-01-10 14:32:48.101160 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-10 14:32:48.101166 | orchestrator | Saturday 10 January 2026 14:31:59 +0000 (0:00:03.415) 0:01:40.117 ****** 2026-01-10 14:32:48.101171 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:48.101175 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:48.101180 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:48.101186 | orchestrator | 2026-01-10 14:32:48.101191 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-10 14:32:48.101196 | orchestrator | Saturday 10 January 2026 14:32:07 +0000 (0:00:07.611) 0:01:47.729 ****** 2026-01-10 14:32:48.101201 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:48.101206 | orchestrator | 2026-01-10 14:32:48.101211 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-10 14:32:48.101217 | orchestrator | Saturday 10 January 2026 14:32:07 +0000 (0:00:00.461) 0:01:48.190 ****** 2026-01-10 14:32:48.101222 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:48.101233 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:48.101238 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:48.101243 | orchestrator | 2026-01-10 14:32:48.101248 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-10 14:32:48.101251 | orchestrator | Saturday 10 January 2026 14:32:08 +0000 (0:00:00.900) 0:01:49.091 ****** 2026-01-10 14:32:48.101255 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.101260 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.101265 | orchestrator | changed: 
[testbed-node-0] 2026-01-10 14:32:48.101270 | orchestrator | 2026-01-10 14:32:48.101275 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-10 14:32:48.101280 | orchestrator | Saturday 10 January 2026 14:32:09 +0000 (0:00:00.745) 0:01:49.836 ****** 2026-01-10 14:32:48.101285 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:48.101290 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:48.101295 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:48.101300 | orchestrator | 2026-01-10 14:32:48.101305 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-10 14:32:48.101311 | orchestrator | Saturday 10 January 2026 14:32:09 +0000 (0:00:00.770) 0:01:50.607 ****** 2026-01-10 14:32:48.101316 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:48.101321 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:48.101326 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:48.101337 | orchestrator | 2026-01-10 14:32:48.101343 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-10 14:32:48.101348 | orchestrator | Saturday 10 January 2026 14:32:10 +0000 (0:00:00.931) 0:01:51.538 ****** 2026-01-10 14:32:48.101351 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:48.101354 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:48.101361 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:48.101365 | orchestrator | 2026-01-10 14:32:48.101369 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-10 14:32:48.101372 | orchestrator | Saturday 10 January 2026 14:32:11 +0000 (0:00:00.810) 0:01:52.349 ****** 2026-01-10 14:32:48.101376 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:48.101380 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:48.101383 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:48.101387 | orchestrator 
| 2026-01-10 14:32:48.101390 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-10 14:32:48.101394 | orchestrator | Saturday 10 January 2026 14:32:12 +0000 (0:00:00.817) 0:01:53.166 ****** 2026-01-10 14:32:48.101399 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:48.101404 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:48.101409 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:48.101415 | orchestrator | 2026-01-10 14:32:48.101420 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-10 14:32:48.101426 | orchestrator | Saturday 10 January 2026 14:32:12 +0000 (0:00:00.309) 0:01:53.475 ****** 2026-01-10 14:32:48.101434 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101440 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101446 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101462 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101469 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101475 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101481 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101487 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101494 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101498 | orchestrator | 2026-01-10 14:32:48.101502 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-10 14:32:48.101505 | orchestrator | Saturday 10 January 2026 14:32:14 +0000 (0:00:01.469) 0:01:54.945 ****** 2026-01-10 14:32:48.101509 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101514 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101518 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 
14:32:48.101524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101528 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101539 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101547 | orchestrator | 2026-01-10 14:32:48.101550 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-10 14:32:48.101553 | orchestrator | Saturday 10 January 2026 14:32:18 +0000 (0:00:03.981) 0:01:58.926 ****** 2026-01-10 14:32:48.101560 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101564 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101569 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101579 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 
'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101594 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:48.101597 | orchestrator | 2026-01-10 14:32:48.101600 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:32:48.101604 | orchestrator | Saturday 10 January 2026 14:32:21 +0000 (0:00:02.809) 0:02:01.736 ****** 2026-01-10 14:32:48.101607 | orchestrator | 2026-01-10 14:32:48.101611 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:32:48.101614 | orchestrator | Saturday 10 January 2026 14:32:21 +0000 (0:00:00.077) 0:02:01.814 ****** 2026-01-10 14:32:48.101618 | orchestrator | 2026-01-10 14:32:48.101621 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:32:48.101625 | orchestrator | Saturday 10 January 2026 14:32:21 +0000 (0:00:00.084) 0:02:01.898 ****** 2026-01-10 14:32:48.101628 | orchestrator | 2026-01-10 14:32:48.101632 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-10 14:32:48.101635 | orchestrator | Saturday 10 January 2026 14:32:21 +0000 (0:00:00.064) 0:02:01.962 ****** 2026-01-10 14:32:48.101639 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:48.101642 | orchestrator | changed: 
[testbed-node-2]
2026-01-10 14:32:48.101646 | orchestrator |
2026-01-10 14:32:48.101651 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-10 14:32:48.101655 | orchestrator | Saturday 10 January 2026 14:32:27 +0000 (0:00:06.129) 0:02:08.092 ******
2026-01-10 14:32:48.101658 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:32:48.101662 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:32:48.101665 | orchestrator |
2026-01-10 14:32:48.101669 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-10 14:32:48.101674 | orchestrator | Saturday 10 January 2026 14:32:33 +0000 (0:00:06.264) 0:02:14.357 ******
2026-01-10 14:32:48.101678 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:32:48.101681 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:32:48.101684 | orchestrator |
2026-01-10 14:32:48.101687 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-10 14:32:48.101690 | orchestrator | Saturday 10 January 2026 14:32:40 +0000 (0:00:06.695) 0:02:21.052 ******
2026-01-10 14:32:48.101693 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:32:48.101696 | orchestrator |
2026-01-10 14:32:48.101699 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-10 14:32:48.101702 | orchestrator | Saturday 10 January 2026 14:32:40 +0000 (0:00:00.203) 0:02:21.256 ******
2026-01-10 14:32:48.101707 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:32:48.101710 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:32:48.101714 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:32:48.101717 | orchestrator |
2026-01-10 14:32:48.101720 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-10 14:32:48.101723 | orchestrator | Saturday 10 January 2026 14:32:41 +0000 (0:00:00.891) 0:02:22.148 ******
2026-01-10 14:32:48.101726 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:32:48.101729 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:32:48.101732 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:32:48.101735 | orchestrator |
2026-01-10 14:32:48.101738 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-10 14:32:48.101742 | orchestrator | Saturday 10 January 2026 14:32:42 +0000 (0:00:00.755) 0:02:22.903 ******
2026-01-10 14:32:48.101745 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:32:48.101748 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:32:48.101751 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:32:48.101754 | orchestrator |
2026-01-10 14:32:48.101757 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-10 14:32:48.101760 | orchestrator | Saturday 10 January 2026 14:32:43 +0000 (0:00:01.118) 0:02:24.022 ******
2026-01-10 14:32:48.101763 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:32:48.101766 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:32:48.101770 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:32:48.101773 | orchestrator |
2026-01-10 14:32:48.101776 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-10 14:32:48.101790 | orchestrator | Saturday 10 January 2026 14:32:44 +0000 (0:00:00.874) 0:02:24.896 ******
2026-01-10 14:32:48.101793 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:32:48.101797 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:32:48.101800 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:32:48.101803 | orchestrator |
2026-01-10 14:32:48.101806 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-10 14:32:48.101809 | orchestrator | Saturday 10 January 2026 14:32:45 +0000 (0:00:00.819) 0:02:25.716 ******
2026-01-10 14:32:48.101812 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:32:48.101815 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:32:48.101820 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:32:48.101825 | orchestrator |
2026-01-10 14:32:48.101830 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:32:48.101835 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-10 14:32:48.101841 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-10 14:32:48.101846 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-10 14:32:48.101851 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:48.101863 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:48.101868 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:48.101873 | orchestrator |
2026-01-10 14:32:48.101878 | orchestrator |
2026-01-10 14:32:48.101884 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:32:48.101889 | orchestrator | Saturday 10 January 2026 14:32:45 +0000 (0:00:00.920) 0:02:26.636 ******
2026-01-10 14:32:48.101894 | orchestrator | ===============================================================================
2026-01-10 14:32:48.101899 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.32s
2026-01-10 14:32:48.101904 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.91s
2026-01-10 14:32:48.101909 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.31s
2026-01-10 14:32:48.101914 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.48s
2026-01-10 14:32:48.101919 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.68s
2026-01-10 14:32:48.101924 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.37s
2026-01-10 14:32:48.101930 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.98s
2026-01-10 14:32:48.101939 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.86s
2026-01-10 14:32:48.101944 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.81s
2026-01-10 14:32:48.101949 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.25s
2026-01-10 14:32:48.101952 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.24s
2026-01-10 14:32:48.101955 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.92s
2026-01-10 14:32:48.101958 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.80s
2026-01-10 14:32:48.101961 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.65s
2026-01-10 14:32:48.101966 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.60s
2026-01-10 14:32:48.101971 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s
2026-01-10 14:32:48.101976 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s
2026-01-10 14:32:48.101984 | orchestrator | ovn-db : Establish whether the OVN SB cluster has already existed ------- 1.23s
2026-01-10 14:32:48.101990 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.23s
2026-01-10 14:32:48.101995 | orchestrator | ovn-controller :
include_tasks ------------------------------------------ 1.22s
2026-01-10 14:32:48.102000 | orchestrator | 2026-01-10 14:32:48 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:51.150288 | orchestrator | 2026-01-10 14:32:51 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED
2026-01-10 14:32:51.152363 | orchestrator | 2026-01-10 14:32:51 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED
2026-01-10 14:32:51.152414 | orchestrator | 2026-01-10 14:32:51 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:56.121429 | orchestrator | 2026-01-10
14:34:56 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:34:56.122143 | orchestrator | 2026-01-10 14:34:56 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:34:56.122508 | orchestrator | 2026-01-10 14:34:56 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:34:59.165871 | orchestrator | 2026-01-10 14:34:59 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:34:59.168430 | orchestrator | 2026-01-10 14:34:59 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:34:59.168973 | orchestrator | 2026-01-10 14:34:59 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:02.213003 | orchestrator | 2026-01-10 14:35:02 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:02.217988 | orchestrator | 2026-01-10 14:35:02 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:02.218113 | orchestrator | 2026-01-10 14:35:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:05.274982 | orchestrator | 2026-01-10 14:35:05 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:05.276019 | orchestrator | 2026-01-10 14:35:05 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:05.276045 | orchestrator | 2026-01-10 14:35:05 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:08.319979 | orchestrator | 2026-01-10 14:35:08 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:08.322722 | orchestrator | 2026-01-10 14:35:08 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:08.322916 | orchestrator | 2026-01-10 14:35:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:11.374980 | orchestrator | 2026-01-10 14:35:11 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state 
STARTED 2026-01-10 14:35:11.376049 | orchestrator | 2026-01-10 14:35:11 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:11.376107 | orchestrator | 2026-01-10 14:35:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:14.418362 | orchestrator | 2026-01-10 14:35:14 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:14.418900 | orchestrator | 2026-01-10 14:35:14 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:14.418953 | orchestrator | 2026-01-10 14:35:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:17.452423 | orchestrator | 2026-01-10 14:35:17 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:17.452517 | orchestrator | 2026-01-10 14:35:17 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:17.452526 | orchestrator | 2026-01-10 14:35:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:20.493674 | orchestrator | 2026-01-10 14:35:20 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:20.495498 | orchestrator | 2026-01-10 14:35:20 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:20.495579 | orchestrator | 2026-01-10 14:35:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:23.539730 | orchestrator | 2026-01-10 14:35:23 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:23.540754 | orchestrator | 2026-01-10 14:35:23 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:23.540806 | orchestrator | 2026-01-10 14:35:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:26.574587 | orchestrator | 2026-01-10 14:35:26 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:26.576281 | orchestrator | 2026-01-10 14:35:26 | INFO  
| Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:26.576388 | orchestrator | 2026-01-10 14:35:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:29.617007 | orchestrator | 2026-01-10 14:35:29 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:29.617472 | orchestrator | 2026-01-10 14:35:29 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:29.617525 | orchestrator | 2026-01-10 14:35:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:32.662163 | orchestrator | 2026-01-10 14:35:32 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:32.663433 | orchestrator | 2026-01-10 14:35:32 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:32.663546 | orchestrator | 2026-01-10 14:35:32 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:35.714302 | orchestrator | 2026-01-10 14:35:35 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:35.716218 | orchestrator | 2026-01-10 14:35:35 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:35.716311 | orchestrator | 2026-01-10 14:35:35 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:38.774457 | orchestrator | 2026-01-10 14:35:38 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:38.776990 | orchestrator | 2026-01-10 14:35:38 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:38.777100 | orchestrator | 2026-01-10 14:35:38 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:41.833439 | orchestrator | 2026-01-10 14:35:41 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:41.835694 | orchestrator | 2026-01-10 14:35:41 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 
14:35:41.835765 | orchestrator | 2026-01-10 14:35:41 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:44.884190 | orchestrator | 2026-01-10 14:35:44 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:44.888735 | orchestrator | 2026-01-10 14:35:44 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:44.888891 | orchestrator | 2026-01-10 14:35:44 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:47.932927 | orchestrator | 2026-01-10 14:35:47 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:47.934665 | orchestrator | 2026-01-10 14:35:47 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:47.934741 | orchestrator | 2026-01-10 14:35:47 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:50.979504 | orchestrator | 2026-01-10 14:35:50 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state STARTED 2026-01-10 14:35:50.980196 | orchestrator | 2026-01-10 14:35:50 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:50.980433 | orchestrator | 2026-01-10 14:35:50 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:54.047735 | orchestrator | 2026-01-10 14:35:54 | INFO  | Task c6537f2e-5b40-4c44-a6ce-47dfe5c0c1d6 is in state SUCCESS 2026-01-10 14:35:54.048952 | orchestrator | 2026-01-10 14:35:54.049023 | orchestrator | 2026-01-10 14:35:54.049033 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:35:54.049041 | orchestrator | 2026-01-10 14:35:54.049048 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:35:54.049056 | orchestrator | Saturday 10 January 2026 14:29:17 +0000 (0:00:00.330) 0:00:00.330 ****** 2026-01-10 14:35:54.049063 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.049071 | orchestrator | 
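The entries above are a task-status polling loop: the orchestrator re-checks both Celery-style task IDs every few seconds and logs "Wait 1 second(s) until the next check" until each one leaves STARTED. A minimal sketch of that pattern, assuming a hypothetical `get_state(task_id)` accessor standing in for whatever API reports the task state:

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=600.0):
    """Poll each task's state until every task reaches a terminal state.

    get_state(task_id) -> str is a hypothetical accessor standing in for
    the real status API (states like STARTED, SUCCESS, FAILURE).
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        # sorted() snapshots the set, so discarding during the loop is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

The sketch polls all pending tasks in one pass per cycle, which matches the log's pattern of two status lines followed by one wait line.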
ok: [testbed-node-1]
2026-01-10 14:35:54.049078 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:54.049084 | orchestrator |
2026-01-10 14:35:54.049091 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:35:54.049098 | orchestrator | Saturday 10 January 2026 14:29:17 +0000 (0:00:00.347) 0:00:00.678 ******
2026-01-10 14:35:54.049105 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-10 14:35:54.049112 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-10 14:35:54.049144 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-10 14:35:54.049150 | orchestrator |
2026-01-10 14:35:54.049156 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-10 14:35:54.049162 | orchestrator |
2026-01-10 14:35:54.049167 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-10 14:35:54.049173 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:00.432) 0:00:01.111 ******
2026-01-10 14:35:54.049179 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:35:54.049186 | orchestrator |
2026-01-10 14:35:54.049191 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-10 14:35:54.049225 | orchestrator | Saturday 10 January 2026 14:29:19 +0000 (0:00:00.650) 0:00:01.761 ******
2026-01-10 14:35:54.049282 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:54.049289 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:54.049294 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:54.049300 | orchestrator |
2026-01-10 14:35:54.049306 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-10 14:35:54.049344 | orchestrator | Saturday 10 January 2026 14:29:20 +0000 (0:00:01.537) 0:00:03.298 ******
2026-01-10 14:35:54.049353 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:35:54.049360 | orchestrator |
2026-01-10 14:35:54.049367 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-10 14:35:54.049374 | orchestrator | Saturday 10 January 2026 14:29:21 +0000 (0:00:00.615) 0:00:03.913 ******
2026-01-10 14:35:54.049381 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:35:54.049394 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:35:54.049401 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:35:54.049407 | orchestrator |
2026-01-10 14:35:54.049414 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-10 14:35:54.049420 | orchestrator | Saturday 10 January 2026 14:29:21 +0000 (0:00:00.682) 0:00:04.596 ******
2026-01-10 14:35:54.049426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:35:54.049433 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:35:54.049439 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:35:54.049445 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:35:54.049452 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:35:54.049458 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-10 14:35:54.049464 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-10 14:35:54.049472 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-10 14:35:54.049487 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-10 14:35:54.049493 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-10 14:35:54.049500 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-10 14:35:54.049506 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-10 14:35:54.049512 | orchestrator |
2026-01-10 14:35:54.049519 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-10 14:35:54.049525 | orchestrator | Saturday 10 January 2026 14:29:24 +0000 (0:00:02.322) 0:00:06.918 ******
2026-01-10 14:35:54.049531 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-10 14:35:54.049538 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-10 14:35:54.049544 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-10 14:35:54.049550 | orchestrator |
2026-01-10 14:35:54.049557 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-10 14:35:54.049563 | orchestrator | Saturday 10 January 2026 14:29:24 +0000 (0:00:00.720) 0:00:07.639 ******
2026-01-10 14:35:54.049570 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-10 14:35:54.049576 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-10 14:35:54.049583 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-10 14:35:54.049589 | orchestrator |
2026-01-10 14:35:54.049595 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-10 14:35:54.049601 | orchestrator | Saturday 10 January 2026 14:29:26 +0000 (0:00:01.319) 0:00:08.958 ******
2026-01-10 14:35:54.049608 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-10 14:35:54.049624 | orchestrator
| skipping: [testbed-node-0] 2026-01-10 14:35:54.049643 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-10 14:35:54.049650 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.049671 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-10 14:35:54.049678 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.049684 | orchestrator | 2026-01-10 14:35:54.049691 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-01-10 14:35:54.049710 | orchestrator | Saturday 10 January 2026 14:29:27 +0000 (0:00:01.056) 0:00:10.015 ****** 2026-01-10 14:35:54.049721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.049734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.049741 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.049783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.049791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.049868 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.049914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.049923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.049930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.049937 | orchestrator | 2026-01-10 14:35:54.049944 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-10 14:35:54.049958 | orchestrator | Saturday 10 January 2026 14:29:30 +0000 (0:00:02.780) 0:00:12.796 ****** 2026-01-10 14:35:54.049965 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.049971 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.049978 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.049984 | orchestrator | 2026-01-10 14:35:54.049991 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-10 14:35:54.050000 | orchestrator | Saturday 10 January 2026 14:29:31 +0000 (0:00:01.437) 0:00:14.233 ****** 2026-01-10 14:35:54.050006 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-10 14:35:54.050055 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-10 14:35:54.050063 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-10 14:35:54.050071 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-10 14:35:54.050078 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-10 14:35:54.050084 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-10 14:35:54.050091 | orchestrator | 2026-01-10 14:35:54.050097 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-10 14:35:54.050104 | orchestrator | Saturday 10 January 2026 14:29:33 +0000 (0:00:01.975) 0:00:16.209 ****** 2026-01-10 14:35:54.050111 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.050117 | orchestrator | changed: [testbed-node-0] 
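The tasks that follow skip or copy healthcheck scripts per service depending on its `enabled` flag: haproxy, proxysql, and keepalived are enabled, while haproxy-ssh is disabled. A minimal sketch of that partition, using only the enabled flags visible in the service dicts in the log (the `partition_services` helper itself is hypothetical):

```python
# Service map reduced to the 'enabled' flags shown in the log output.
services = {
    "haproxy": {"enabled": True},
    "proxysql": {"enabled": True},
    "keepalived": {"enabled": True},
    "haproxy-ssh": {"enabled": False},
}


def partition_services(services):
    """Split a Kolla-style service map by its 'enabled' flag.

    Checks are copied for enabled services and removed for disabled
    ones, mirroring the changed/skipping pattern in the tasks below.
    """
    enabled = [name for name, svc in services.items() if svc["enabled"]]
    disabled = [name for name, svc in services.items() if not svc["enabled"]]
    return enabled, disabled
```

With this map, the enabled list covers the three deployed containers and the disabled list holds only haproxy-ssh, which is why the "Copying checks" task reports changed for the former and the removal task skips every item.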
2026-01-10 14:35:54.050124 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.050131 | orchestrator | 2026-01-10 14:35:54.050137 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-10 14:35:54.050143 | orchestrator | Saturday 10 January 2026 14:29:34 +0000 (0:00:01.206) 0:00:17.415 ****** 2026-01-10 14:35:54.050148 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.050154 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.050161 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.050167 | orchestrator | 2026-01-10 14:35:54.050174 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-10 14:35:54.050181 | orchestrator | Saturday 10 January 2026 14:29:37 +0000 (0:00:02.582) 0:00:19.998 ****** 2026-01-10 14:35:54.050188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.050213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.050221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.050229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:35:54.050248 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.050255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.050265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.050312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.050320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9', 
'__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:35:54.050328 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.050350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.050363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.050495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.050514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:35:54.050522 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.050529 | orchestrator | 2026-01-10 14:35:54.050536 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-10 14:35:54.050542 | orchestrator | Saturday 10 January 2026 14:29:38 +0000 (0:00:00.729) 0:00:20.727 ****** 2026-01-10 14:35:54.050549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050556 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050596 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.050603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:35:54.050609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.050637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.050650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:35:54.050657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9', '__omit_place_holder__dd1bbac5fd0ee518895e64b9a24cf9a4ef7e51f9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:35:54.050664 | orchestrator | 2026-01-10 14:35:54.050671 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-10 14:35:54.050678 | orchestrator | Saturday 10 January 2026 14:29:40 +0000 (0:00:02.900) 0:00:23.628 ****** 2026-01-10 14:35:54.050684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.050738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.050745 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.050751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.050758 | orchestrator | 2026-01-10 14:35:54.050764 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-10 14:35:54.050770 | orchestrator | Saturday 10 January 2026 14:29:45 +0000 (0:00:04.113) 0:00:27.741 ****** 2026-01-10 14:35:54.050777 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-10 14:35:54.050784 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-10 14:35:54.050791 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-10 14:35:54.050803 | orchestrator | 2026-01-10 14:35:54.050850 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-10 14:35:54.050857 | orchestrator | Saturday 10 January 2026 14:29:49 
+0000 (0:00:04.008) 0:00:31.749 ****** 2026-01-10 14:35:54.050864 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-10 14:35:54.050871 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-10 14:35:54.050883 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-10 14:35:54.050890 | orchestrator | 2026-01-10 14:35:54.050921 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-10 14:35:54.050929 | orchestrator | Saturday 10 January 2026 14:29:54 +0000 (0:00:05.044) 0:00:36.794 ****** 2026-01-10 14:35:54.050935 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.050942 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.050997 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.051005 | orchestrator | 2026-01-10 14:35:54.051012 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-10 14:35:54.051018 | orchestrator | Saturday 10 January 2026 14:29:54 +0000 (0:00:00.851) 0:00:37.646 ****** 2026-01-10 14:35:54.051025 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-10 14:35:54.051033 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-10 14:35:54.051040 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-10 14:35:54.051046 | orchestrator | 2026-01-10 14:35:54.051053 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-10 14:35:54.051059 | orchestrator | Saturday 10 January 2026 14:29:58 
+0000 (0:00:03.566) 0:00:41.213 ****** 2026-01-10 14:35:54.051065 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-10 14:35:54.051072 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-10 14:35:54.051079 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-10 14:35:54.051085 | orchestrator | 2026-01-10 14:35:54.051092 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-10 14:35:54.051098 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:02.949) 0:00:44.162 ****** 2026-01-10 14:35:54.051105 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-10 14:35:54.051111 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-10 14:35:54.051117 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-10 14:35:54.051123 | orchestrator | 2026-01-10 14:35:54.051129 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-10 14:35:54.051135 | orchestrator | Saturday 10 January 2026 14:30:03 +0000 (0:00:01.894) 0:00:46.057 ****** 2026-01-10 14:35:54.051141 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-10 14:35:54.051179 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-10 14:35:54.051187 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-10 14:35:54.051194 | orchestrator | 2026-01-10 14:35:54.051201 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-10 14:35:54.051208 | orchestrator | Saturday 10 January 2026 14:30:05 +0000 (0:00:01.774) 0:00:47.834 ****** 2026-01-10 14:35:54.051215 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.051228 | orchestrator | 2026-01-10 14:35:54.051246 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-10 14:35:54.051252 | orchestrator | Saturday 10 January 2026 14:30:06 +0000 (0:00:01.356) 0:00:49.190 ****** 2026-01-10 14:35:54.051259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.051268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.051285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.051293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.051300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.051307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.051320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.051328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.051335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.051341 | orchestrator | 2026-01-10 14:35:54.051347 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-10 14:35:54.051353 | orchestrator | Saturday 10 January 2026 14:30:11 +0000 (0:00:04.566) 0:00:53.756 ****** 2026-01-10 14:35:54.051375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051417 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.051424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051431 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.051444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051536 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.051543 | orchestrator | 2026-01-10 14:35:54.051550 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-10 14:35:54.051557 | orchestrator | Saturday 10 January 2026 14:30:11 +0000 (0:00:00.657) 0:00:54.414 ****** 2026-01-10 14:35:54.051564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051583 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051590 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.051619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051678 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051685 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.051692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051717 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.051724 | orchestrator | 2026-01-10 14:35:54.051730 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-10 14:35:54.051737 | orchestrator | Saturday 10 January 2026 14:30:12 +0000 (0:00:01.043) 0:00:55.457 ****** 2026-01-10 14:35:54.051743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051772 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.051778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051803 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.051809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051888 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.051894 | orchestrator | 2026-01-10 14:35:54.051901 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-10 14:35:54.051907 | orchestrator | Saturday 10 January 2026 14:30:13 +0000 (0:00:00.710) 0:00:56.167 ****** 2026-01-10 14:35:54.051914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.051954 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.051961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.051968 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.051983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.051996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052010 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.052017 | orchestrator | 2026-01-10 14:35:54.052023 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-10 14:35:54.052030 | orchestrator | Saturday 10 January 2026 14:30:14 +0000 (0:00:00.694) 0:00:56.862 ****** 2026-01-10 14:35:54.052038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052058 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.052082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052109 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.052116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052136 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.052173 | orchestrator | 2026-01-10 14:35:54.052182 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-10 14:35:54.052189 | orchestrator | Saturday 10 January 2026 14:30:14 +0000 (0:00:00.837) 0:00:57.699 ****** 2026-01-10 14:35:54.052195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2026-01-10 14:35:54.052215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052228 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.052235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052333 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052348 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.052356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052429 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.052438 | orchestrator | 2026-01-10 14:35:54.052445 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-01-10 14:35:54.052451 | orchestrator | Saturday 10 January 2026 14:30:16 +0000 (0:00:01.379) 0:00:59.079 ****** 2026-01-10 14:35:54.052458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052480 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.052487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052524 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.052531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052551 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.052557 | orchestrator | 2026-01-10 14:35:54.052568 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-10 14:35:54.052578 | orchestrator | Saturday 10 January 2026 14:30:16 +0000 (0:00:00.522) 0:00:59.601 ****** 2026-01-10 14:35:54.052585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052615 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.052627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052647 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.052653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:35:54.052660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:35:54.052672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:35:54.052678 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.052685 | orchestrator | 2026-01-10 14:35:54.052691 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-10 14:35:54.052698 | orchestrator | Saturday 10 January 2026 14:30:17 +0000 (0:00:00.717) 0:01:00.319 ****** 2026-01-10 14:35:54.052704 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-10 14:35:54.052712 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-10 14:35:54.052727 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-10 14:35:54.052733 | orchestrator | 2026-01-10 14:35:54.052740 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-10 14:35:54.052746 | orchestrator | Saturday 10 January 2026 14:30:19 +0000 (0:00:01.524) 0:01:01.844 ****** 2026-01-10 14:35:54.052752 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-10 14:35:54.052759 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-10 14:35:54.052765 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-10 14:35:54.052771 | orchestrator | 2026-01-10 14:35:54.052777 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-10 14:35:54.052783 | orchestrator | Saturday 10 January 2026 14:30:20 +0000 (0:00:01.662) 0:01:03.507 ****** 2026-01-10 14:35:54.052790 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:35:54.052796 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:35:54.052802 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:35:54.052808 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:35:54.052840 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.052876 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:35:54.052883 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.052889 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:35:54.052896 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.052902 | orchestrator | 2026-01-10 14:35:54.052908 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-10 14:35:54.052915 | orchestrator | Saturday 10 January 2026 14:30:21 +0000 (0:00:00.976) 0:01:04.483 ****** 2026-01-10 14:35:54.052937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.052952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.052999 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:35:54.053064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.053072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.053080 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:35:54.053144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.053166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.053173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:35:54.053180 | orchestrator | 2026-01-10 14:35:54.053187 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-10 14:35:54.053194 | orchestrator | Saturday 10 January 2026 14:30:24 +0000 (0:00:02.839) 0:01:07.323 ****** 2026-01-10 14:35:54.053201 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.053208 | orchestrator | 2026-01-10 14:35:54.053214 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-10 14:35:54.053220 | orchestrator | Saturday 10 January 2026 14:30:25 +0000 (0:00:00.634) 0:01:07.957 ****** 2026-01-10 14:35:54.053232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-10 14:35:54.053245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.054066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-10 14:35:54.054159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.054166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-10 14:35:54.054201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.054213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054227 | orchestrator | 2026-01-10 14:35:54.054235 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-10 14:35:54.054243 | orchestrator | Saturday 10 January 2026 14:30:30 +0000 (0:00:05.080) 0:01:13.038 ****** 2026-01-10 14:35:54.054250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-10 14:35:54.054262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.054273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054291 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.054299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-10 14:35:54.054307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.054313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054332 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.054338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-10 14:35:54.054352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.054359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054373 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.054379 | orchestrator | 2026-01-10 14:35:54.054386 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-10 
14:35:54.054393 | orchestrator | Saturday 10 January 2026 14:30:31 +0000 (0:00:01.111) 0:01:14.149 ****** 2026-01-10 14:35:54.054401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-10 14:35:54.054409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-10 14:35:54.054418 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.054471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-10 14:35:54.054479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-10 14:35:54.054486 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.054493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-10 14:35:54.054504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-10 14:35:54.054511 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.054518 | orchestrator | 2026-01-10 14:35:54.054525 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-10 14:35:54.054531 | orchestrator | Saturday 10 January 2026 14:30:32 +0000 (0:00:00.944) 0:01:15.094 ****** 2026-01-10 
14:35:54.054543 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.054550 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.054556 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.054563 | orchestrator | 2026-01-10 14:35:54.054569 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-10 14:35:54.054575 | orchestrator | Saturday 10 January 2026 14:30:33 +0000 (0:00:01.243) 0:01:16.337 ****** 2026-01-10 14:35:54.054580 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.054586 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.054592 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.054598 | orchestrator | 2026-01-10 14:35:54.054605 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-10 14:35:54.054611 | orchestrator | Saturday 10 January 2026 14:30:36 +0000 (0:00:02.921) 0:01:19.258 ****** 2026-01-10 14:35:54.054617 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.054623 | orchestrator | 2026-01-10 14:35:54.054632 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-10 14:35:54.054646 | orchestrator | Saturday 10 January 2026 14:30:37 +0000 (0:00:00.957) 0:01:20.216 ****** 2026-01-10 14:35:54.054655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.054667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.054722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054753 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.054765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054790 | orchestrator | 2026-01-10 14:35:54.054803 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-10 14:35:54.054841 | orchestrator | Saturday 10 January 2026 14:30:41 +0000 (0:00:03.525) 0:01:23.742 ****** 2026-01-10 14:35:54.054854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.054865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054880 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.054887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.054894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054918 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.054931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.054952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.054972 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.054979 | orchestrator | 2026-01-10 14:35:54.054986 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-10 14:35:54.054992 | orchestrator | Saturday 10 January 2026 14:30:41 +0000 (0:00:00.623) 0:01:24.365 ****** 2026-01-10 14:35:54.054999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055016 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.055022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055040 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.055046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055059 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.055065 | orchestrator | 2026-01-10 14:35:54.055072 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-10 14:35:54.055078 | orchestrator | Saturday 10 January 2026 14:30:42 +0000 (0:00:01.129) 0:01:25.494 ****** 2026-01-10 14:35:54.055085 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.055091 | orchestrator | 
changed: [testbed-node-1] 2026-01-10 14:35:54.055097 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.055104 | orchestrator | 2026-01-10 14:35:54.055110 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-10 14:35:54.055117 | orchestrator | Saturday 10 January 2026 14:30:44 +0000 (0:00:01.481) 0:01:26.975 ****** 2026-01-10 14:35:54.055123 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.055133 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.055138 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.055144 | orchestrator | 2026-01-10 14:35:54.055149 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-10 14:35:54.055155 | orchestrator | Saturday 10 January 2026 14:30:46 +0000 (0:00:02.200) 0:01:29.176 ****** 2026-01-10 14:35:54.055162 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.055168 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.055173 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.055179 | orchestrator | 2026-01-10 14:35:54.055184 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-10 14:35:54.055190 | orchestrator | Saturday 10 January 2026 14:30:46 +0000 (0:00:00.311) 0:01:29.488 ****** 2026-01-10 14:35:54.055195 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.055201 | orchestrator | 2026-01-10 14:35:54.055207 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-10 14:35:54.055212 | orchestrator | Saturday 10 January 2026 14:30:47 +0000 (0:00:00.857) 0:01:30.346 ****** 2026-01-10 14:35:54.055223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-10 14:35:54.055231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-10 14:35:54.055243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-10 14:35:54.055249 | orchestrator | 2026-01-10 14:35:54.055256 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-10 14:35:54.055262 | orchestrator | Saturday 10 January 2026 14:30:50 +0000 (0:00:02.789) 0:01:33.135 ****** 2026-01-10 14:35:54.055272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-10 14:35:54.055278 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.055288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-10 14:35:54.055295 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.055302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-10 14:35:54.055312 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.055319 | orchestrator | 2026-01-10 14:35:54.055325 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-10 14:35:54.055332 | orchestrator | Saturday 10 January 2026 14:30:52 +0000 (0:00:01.795) 0:01:34.931 ****** 2026-01-10 14:35:54.055339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:35:54.055348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:35:54.055356 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.055363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:35:54.055369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:35:54.055376 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.055385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:35:54.055392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:35:54.055398 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.055405 | orchestrator | 2026-01-10 14:35:54.055411 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-10 14:35:54.055418 | orchestrator | Saturday 10 January 2026 14:30:54 +0000 (0:00:02.432) 0:01:37.363 ****** 2026-01-10 14:35:54.055424 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.055431 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.055440 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.055446 | orchestrator | 2026-01-10 14:35:54.055452 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-10 14:35:54.055459 | orchestrator | Saturday 10 January 2026 14:30:55 +0000 (0:00:00.900) 0:01:38.264 ****** 2026-01-10 14:35:54.055470 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.055477 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.055484 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.055490 | orchestrator | 2026-01-10 14:35:54.055496 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-10 14:35:54.055502 | orchestrator | Saturday 10 January 2026 14:30:57 +0000 (0:00:01.590) 
0:01:39.854 ****** 2026-01-10 14:35:54.055508 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.055515 | orchestrator | 2026-01-10 14:35:54.055521 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-10 14:35:54.055528 | orchestrator | Saturday 10 January 2026 14:30:57 +0000 (0:00:00.723) 0:01:40.577 ****** 2026-01-10 14:35:54.055535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.055542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055549 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.055588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.055619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055651 | orchestrator | 2026-01-10 14:35:54.055657 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-10 14:35:54.055664 | orchestrator | Saturday 10 January 2026 14:31:02 +0000 (0:00:04.746) 0:01:45.324 ****** 2026-01-10 14:35:54.055671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.055678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055710 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.055717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.055723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055744 | 
orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.055754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.055768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.055788 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.055795 | orchestrator | 2026-01-10 14:35:54.055802 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-10 14:35:54.055809 | orchestrator | Saturday 10 January 2026 14:31:03 +0000 (0:00:00.989) 0:01:46.313 ****** 2026-01-10 14:35:54.055867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055882 | 
orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.055889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055907 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.055918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-10 14:35:54.055932 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.055938 | orchestrator | 2026-01-10 14:35:54.055945 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-10 14:35:54.055951 | orchestrator | Saturday 10 January 2026 14:31:04 +0000 (0:00:01.053) 0:01:47.367 ****** 2026-01-10 14:35:54.055958 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.055965 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.055971 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.055978 | orchestrator | 2026-01-10 14:35:54.055984 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-10 14:35:54.055990 | orchestrator | Saturday 10 January 2026 14:31:06 +0000 (0:00:01.441) 0:01:48.808 ****** 2026-01-10 
14:35:54.055997 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.056003 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.056010 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.056017 | orchestrator | 2026-01-10 14:35:54.056028 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-10 14:35:54.056034 | orchestrator | Saturday 10 January 2026 14:31:08 +0000 (0:00:02.383) 0:01:51.192 ****** 2026-01-10 14:35:54.056041 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.056047 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.056053 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.056058 | orchestrator | 2026-01-10 14:35:54.056065 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-10 14:35:54.056073 | orchestrator | Saturday 10 January 2026 14:31:09 +0000 (0:00:00.578) 0:01:51.771 ****** 2026-01-10 14:35:54.056081 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.056088 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.056094 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.056101 | orchestrator | 2026-01-10 14:35:54.056107 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-10 14:35:54.056114 | orchestrator | Saturday 10 January 2026 14:31:09 +0000 (0:00:00.351) 0:01:52.122 ****** 2026-01-10 14:35:54.056121 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.056127 | orchestrator | 2026-01-10 14:35:54.056134 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-10 14:35:54.056139 | orchestrator | Saturday 10 January 2026 14:31:10 +0000 (0:00:00.795) 0:01:52.918 ****** 2026-01-10 14:35:54.056146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:35:54.056159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:35:54.056166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 
14:35:54.056440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:35:54.056461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:35:54.056472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:35:54.056525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:35:54.056532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})
2026-01-10 14:35:54.056574 | orchestrator |
2026-01-10 14:35:54.056581 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-01-10 14:35:54.056588 | orchestrator | Saturday 10 January 2026 14:31:16 +0000 (0:00:06.037) 0:01:58.955 ******
2026-01-10 14:35:54.056595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-10 14:35:54.056603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled':
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:35:54.056613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:35:54.056620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:35:54.056631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})  2026-01-10 14:35:54.056693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056726 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.056734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056748 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.056758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:35:54.056768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:35:54.056775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.056800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 
14:35:54.056809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-10 14:35:54.056867 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:54.056874 | orchestrator |
2026-01-10 14:35:54.056880 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-01-10 14:35:54.056887 | orchestrator | Saturday 10 January 2026 14:31:17 +0000 (0:00:00.863) 0:01:59.818 ******
2026-01-10 14:35:54.056894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:35:54.056902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:35:54.056908 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:54.056919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:35:54.056926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:35:54.056938 |
orchestrator | skipping: [testbed-node-1]
2026-01-10 14:35:54.056945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:35:54.056952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-10 14:35:54.056958 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:35:54.057003 | orchestrator |
2026-01-10 14:35:54.057010 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-01-10 14:35:54.057016 | orchestrator | Saturday 10 January 2026 14:31:18 +0000 (0:00:01.145) 0:02:00.964 ******
2026-01-10 14:35:54.057022 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:54.057028 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:54.057034 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:54.057039 | orchestrator |
2026-01-10 14:35:54.057049 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-01-10 14:35:54.057057 | orchestrator | Saturday 10 January 2026 14:31:19 +0000 (0:00:01.372) 0:02:02.336 ******
2026-01-10 14:35:54.057064 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:35:54.057074 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:35:54.057084 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:35:54.057096 | orchestrator |
2026-01-10 14:35:54.057108 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-01-10 14:35:54.057119 | orchestrator | Saturday 10 January 2026 14:31:21 +0000 (0:00:00.466) 0:02:03.966 ******
2026-01-10 14:35:54.057131 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:35:54.057139 | orchestrator | skipping:
[testbed-node-1] 2026-01-10 14:35:54.057145 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.057151 | orchestrator | 2026-01-10 14:35:54.057157 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-10 14:35:54.057167 | orchestrator | Saturday 10 January 2026 14:31:21 +0000 (0:00:00.466) 0:02:04.433 ****** 2026-01-10 14:35:54.057179 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.057244 | orchestrator | 2026-01-10 14:35:54.057252 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-10 14:35:54.057258 | orchestrator | Saturday 10 January 2026 14:31:22 +0000 (0:00:00.777) 0:02:05.211 ****** 2026-01-10 14:35:54.057271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:35:54.057294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:35:54.057316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:35:54.057333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:35:54.057353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:35:54.057381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-10 14:35:54.057400 | orchestrator |
2026-01-10 14:35:54.057412 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-01-10 14:35:54.057424 | orchestrator | Saturday 10 January 2026 14:31:28 +0000 (0:00:06.324)
0:02:11.536 ****** 2026-01-10 14:35:54.057437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:35:54.057459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:35:54.057471 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.057477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:35:54.057491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:35:54.057502 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.057514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:35:54.057524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:35:54.057535 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.057542 | orchestrator | 2026-01-10 14:35:54.057548 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-10 14:35:54.057555 | orchestrator | Saturday 10 January 2026 14:31:33 +0000 (0:00:04.786) 0:02:16.323 ****** 2026-01-10 14:35:54.057561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:35:54.057572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:35:54.057579 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.057585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:35:54.057592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:35:54.057599 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.057605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:35:54.057612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:35:54.057623 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.057629 | orchestrator | 2026-01-10 14:35:54.057635 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-10 14:35:54.057642 | orchestrator | Saturday 10 January 2026 14:31:36 +0000 (0:00:03.079) 0:02:19.402 ****** 2026-01-10 14:35:54.057648 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.057654 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.057661 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.057667 | orchestrator | 2026-01-10 14:35:54.057673 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-10 14:35:54.057680 | orchestrator | Saturday 10 January 2026 14:31:38 +0000 (0:00:01.434) 0:02:20.837 ****** 2026-01-10 14:35:54.057686 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.057693 | orchestrator | changed: [testbed-node-1] 2026-01-10 
14:35:54.057699 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.057705 | orchestrator | 2026-01-10 14:35:54.057714 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-10 14:35:54.057721 | orchestrator | Saturday 10 January 2026 14:31:40 +0000 (0:00:02.017) 0:02:22.854 ****** 2026-01-10 14:35:54.057727 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.057734 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.057740 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.057746 | orchestrator | 2026-01-10 14:35:54.057752 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-10 14:35:54.057759 | orchestrator | Saturday 10 January 2026 14:31:40 +0000 (0:00:00.692) 0:02:23.547 ****** 2026-01-10 14:35:54.057765 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.057771 | orchestrator | 2026-01-10 14:35:54.057778 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-10 14:35:54.057784 | orchestrator | Saturday 10 January 2026 14:31:41 +0000 (0:00:00.938) 0:02:24.485 ****** 2026-01-10 14:35:54.057795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:35:54.057803 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:35:54.057810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:35:54.057842 | orchestrator | 2026-01-10 14:35:54.057850 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-10 14:35:54.057856 | orchestrator | Saturday 10 January 2026 14:31:45 +0000 (0:00:04.117) 0:02:28.603 ****** 2026-01-10 14:35:54.057864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-10 14:35:54.057874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-10 14:35:54.057900 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.057907 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.057914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-10 14:35:54.057921 | orchestrator | skipping: [testbed-node-2] 
2026-01-10 14:35:54.057927 | orchestrator | 2026-01-10 14:35:54.057938 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-10 14:35:54.057945 | orchestrator | Saturday 10 January 2026 14:31:46 +0000 (0:00:00.730) 0:02:29.333 ****** 2026-01-10 14:35:54.057951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-10 14:35:54.057959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-10 14:35:54.057966 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.057973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-10 14:35:54.057979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-10 14:35:54.057986 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.057993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-10 14:35:54.058004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-10 14:35:54.058011 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.058058 | orchestrator | 2026-01-10 14:35:54.058065 | orchestrator | 
TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-10 14:35:54.058072 | orchestrator | Saturday 10 January 2026 14:31:47 +0000 (0:00:00.691) 0:02:30.025 ****** 2026-01-10 14:35:54.058078 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.058084 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.058091 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.058097 | orchestrator | 2026-01-10 14:35:54.058103 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-10 14:35:54.058109 | orchestrator | Saturday 10 January 2026 14:31:48 +0000 (0:00:01.316) 0:02:31.341 ****** 2026-01-10 14:35:54.058116 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.058122 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.058128 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.058133 | orchestrator | 2026-01-10 14:35:54.058139 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-10 14:35:54.058144 | orchestrator | Saturday 10 January 2026 14:31:50 +0000 (0:00:02.193) 0:02:33.535 ****** 2026-01-10 14:35:54.058150 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.058155 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.058162 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.058168 | orchestrator | 2026-01-10 14:35:54.058174 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-10 14:35:54.058180 | orchestrator | Saturday 10 January 2026 14:31:51 +0000 (0:00:00.528) 0:02:34.063 ****** 2026-01-10 14:35:54.058187 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.058194 | orchestrator | 2026-01-10 14:35:54.058201 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-10 14:35:54.058208 | 
orchestrator | Saturday 10 January 2026 14:31:52 +0000 (0:00:00.950) 0:02:35.013 ****** 2026-01-10 14:35:54.058225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:35:54.058239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:35:54.058256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:35:54.058269 | orchestrator | 2026-01-10 14:35:54.058276 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-10 14:35:54.058283 | orchestrator | Saturday 10 January 2026 14:31:55 +0000 (0:00:03.509) 0:02:38.523 ****** 2026-01-10 14:35:54.058294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:35:54.058302 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.058313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:35:54.058326 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.058340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:35:54.058347 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.058353 | orchestrator | 2026-01-10 14:35:54.058359 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-10 14:35:54.058366 | orchestrator | Saturday 
10 January 2026 14:31:57 +0000 (0:00:01.376) 0:02:39.899 ****** 2026-01-10 14:35:54.058377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:35:54.058394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:35:54.058404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:35:54.058411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:35:54.058419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:35:54.058426 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-10 14:35:54.058433 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.058440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:35:54.058447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:35:54.058454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:35:54.058461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-10 14:35:54.058467 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.058474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:35:54.058484 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:35:54.058491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-10 14:35:54.058506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-10 14:35:54.058519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-10 14:35:54.058528 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.058535 | orchestrator | 2026-01-10 14:35:54.058542 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-10 14:35:54.058549 | orchestrator | Saturday 10 January 2026 14:31:58 +0000 (0:00:00.974) 0:02:40.874 ****** 2026-01-10 14:35:54.058555 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.058562 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.058569 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.058575 | orchestrator | 2026-01-10 14:35:54.058582 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-10 14:35:54.058589 | 
orchestrator | Saturday 10 January 2026 14:31:59 +0000 (0:00:01.352) 0:02:42.227 ****** 2026-01-10 14:35:54.058595 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.058601 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.058608 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.058614 | orchestrator | 2026-01-10 14:35:54.058620 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-10 14:35:54.058627 | orchestrator | Saturday 10 January 2026 14:32:01 +0000 (0:00:02.369) 0:02:44.597 ****** 2026-01-10 14:35:54.058633 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.058639 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.058646 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.058652 | orchestrator | 2026-01-10 14:35:54.058658 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-10 14:35:54.058665 | orchestrator | Saturday 10 January 2026 14:32:02 +0000 (0:00:00.355) 0:02:44.952 ****** 2026-01-10 14:35:54.058671 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.058678 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.058684 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.058690 | orchestrator | 2026-01-10 14:35:54.058696 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-10 14:35:54.058703 | orchestrator | Saturday 10 January 2026 14:32:02 +0000 (0:00:00.646) 0:02:45.599 ****** 2026-01-10 14:35:54.058709 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.058715 | orchestrator | 2026-01-10 14:35:54.058721 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-10 14:35:54.058727 | orchestrator | Saturday 10 January 2026 14:32:03 +0000 (0:00:00.998) 0:02:46.598 ****** 2026-01-10 
14:35:54.058734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:35:54.058750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:35:54.058758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:35:54.058770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:35:54.058778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:35:54.058785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:35:54.058792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:35:54.058807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:35:54.058838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:35:54.058845 | orchestrator | 2026-01-10 14:35:54.058852 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-10 14:35:54.058859 | orchestrator | Saturday 10 January 2026 14:32:07 +0000 (0:00:03.743) 0:02:50.341 ****** 2026-01-10 14:35:54.058865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:35:54.058872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:35:54.058879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:35:54.058890 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.058901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:35:54.058912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:35:54.058919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:35:54.058925 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.058932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:35:54.058939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:35:54.058952 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:35:54.058959 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.058966 | orchestrator | 2026-01-10 14:35:54.058976 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-10 14:35:54.058983 | orchestrator | Saturday 10 January 2026 14:32:08 +0000 (0:00:01.045) 0:02:51.386 ****** 2026-01-10 14:35:54.058990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:35:54.058998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:35:54.059005 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.059016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:35:54.059024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:35:54.059031 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.059038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:35:54.059141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-10 14:35:54.059148 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.059154 | orchestrator | 2026-01-10 14:35:54.059160 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-10 14:35:54.059167 | orchestrator | Saturday 10 January 2026 14:32:09 +0000 (0:00:00.864) 0:02:52.251 ****** 2026-01-10 14:35:54.059173 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.059180 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.059187 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.059193 | orchestrator | 2026-01-10 14:35:54.059201 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-10 14:35:54.059208 | orchestrator | Saturday 10 January 2026 14:32:10 +0000 (0:00:01.410) 0:02:53.661 ****** 2026-01-10 14:35:54.059220 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.059226 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.059233 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.059239 | orchestrator | 
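The haproxy-config tasks above iterate over service dicts like the horizon and keystone entries shown, each carrying a `listen_port`, a `mode`, and optional `frontend_http_extra` directives (here, routing ACME HTTP-01 challenges to `acme_client_back`). As a purely illustrative sketch — not the actual kolla-ansible template, whose function and stanza names are assumptions here — the mapping from one such dict to an HAProxy frontend can be pictured like this:

```python
# Hypothetical sketch: render a minimal HAProxy frontend stanza from a
# kolla-style service dict shaped like the ones logged above. Only the keys
# visible in the log are used ('mode', 'listen_port', 'frontend_http_extra');
# the "<name>_front"/"<name>_back" naming is an assumption for illustration.

def render_frontend(name: str, cfg: dict) -> str:
    lines = [f"frontend {name}_front"]
    lines.append(f"    bind *:{cfg['listen_port']}")
    if cfg.get("mode") == "http":
        lines.append("    mode http")
    # Extra frontend directives are appended verbatim, e.g. the rule that
    # diverts ACME challenge requests to a dedicated backend.
    for extra in cfg.get("frontend_http_extra", []):
        lines.append(f"    {extra}")
    lines.append(f"    default_backend {name}_back")
    return "\n".join(lines)


# Values taken from the 'horizon' item in the log above.
horizon = {
    "enabled": True,
    "mode": "http",
    "external": False,
    "port": "443",
    "listen_port": "80",
    "frontend_http_extra": [
        "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }"
    ],
}

print(render_frontend("horizon", horizon))
```

This also makes the `skipping` results legible: items without the relevant keys (or gated by a condition such as firewall configuration being disabled) simply produce no stanza for that node.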
2026-01-10 14:35:54.059246 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-10 14:35:54.059252 | orchestrator | Saturday 10 January 2026 14:32:13 +0000 (0:00:02.353) 0:02:56.014 ****** 2026-01-10 14:35:54.059258 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.059266 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.059273 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.059280 | orchestrator | 2026-01-10 14:35:54.059286 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-10 14:35:54.059293 | orchestrator | Saturday 10 January 2026 14:32:13 +0000 (0:00:00.586) 0:02:56.601 ****** 2026-01-10 14:35:54.059300 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.059307 | orchestrator | 2026-01-10 14:35:54.059313 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-10 14:35:54.059320 | orchestrator | Saturday 10 January 2026 14:32:15 +0000 (0:00:01.133) 0:02:57.734 ****** 2026-01-10 14:35:54.059328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:35:54.059341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:35:54.059353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:35:54.059384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059391 | orchestrator | 2026-01-10 14:35:54.059398 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-10 14:35:54.059405 | orchestrator | Saturday 10 January 2026 14:32:18 +0000 (0:00:03.844) 0:03:01.579 ****** 2026-01-10 14:35:54.059417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:35:54.059430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059441 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.059447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:35:54.059454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059460 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.059473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:35:54.059481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059488 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.059495 | orchestrator | 2026-01-10 
14:35:54.059506 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-10 14:35:54.059513 | orchestrator | Saturday 10 January 2026 14:32:19 +0000 (0:00:00.974) 0:03:02.553 ****** 2026-01-10 14:35:54.059520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:35:54.059533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:35:54.059540 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.059546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:35:54.059552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:35:54.059558 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.059564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:35:54.059570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-10 14:35:54.059577 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.059583 | orchestrator | 2026-01-10 14:35:54.059595 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] 
************* 2026-01-10 14:35:54.059603 | orchestrator | Saturday 10 January 2026 14:32:20 +0000 (0:00:00.853) 0:03:03.407 ****** 2026-01-10 14:35:54.059611 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.059618 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.059625 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.059632 | orchestrator | 2026-01-10 14:35:54.059661 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-10 14:35:54.059669 | orchestrator | Saturday 10 January 2026 14:32:21 +0000 (0:00:01.171) 0:03:04.579 ****** 2026-01-10 14:35:54.059675 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.059681 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.059688 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.059694 | orchestrator | 2026-01-10 14:35:54.059700 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-10 14:35:54.059706 | orchestrator | Saturday 10 January 2026 14:32:23 +0000 (0:00:01.963) 0:03:06.542 ****** 2026-01-10 14:35:54.059712 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.059719 | orchestrator | 2026-01-10 14:35:54.059725 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-10 14:35:54.059731 | orchestrator | Saturday 10 January 2026 14:32:24 +0000 (0:00:01.143) 0:03:07.685 ****** 2026-01-10 14:35:54.059741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-10 14:35:54.059749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-10 14:35:54.059787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-10 14:35:54.059884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 
5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059906 | orchestrator | 2026-01-10 14:35:54.059912 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-10 14:35:54.059919 | orchestrator | Saturday 10 January 2026 14:32:28 +0000 (0:00:03.656) 0:03:11.342 ****** 2026-01-10 14:35:54.059930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-10 14:35:54.059942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059967 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.059973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-10 14:35:54.059980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.059987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.060003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.060010 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.060021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-10 14:35:54.060027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.060034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.060041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.060048 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.060055 | orchestrator | 2026-01-10 14:35:54.060061 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-10 14:35:54.060068 | orchestrator | Saturday 10 January 2026 14:32:29 +0000 (0:00:00.712) 0:03:12.055 ****** 2026-01-10 14:35:54.060079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-10 14:35:54.060085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-10 14:35:54.060092 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.060103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-10 14:35:54.060110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-10 14:35:54.060117 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.060124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-10 14:35:54.060131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-10 14:35:54.060137 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.060143 | orchestrator | 2026-01-10 14:35:54.060149 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-10 14:35:54.060155 | orchestrator | Saturday 10 January 2026 14:32:30 +0000 (0:00:01.284) 0:03:13.339 ****** 2026-01-10 14:35:54.060161 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.060172 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.060179 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.060186 | orchestrator | 2026-01-10 14:35:54.060193 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-10 14:35:54.060200 | orchestrator | Saturday 10 January 2026 14:32:32 +0000 (0:00:01.457) 0:03:14.797 ****** 2026-01-10 14:35:54.060207 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.060213 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.060220 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.060227 | orchestrator | 2026-01-10 
14:35:54.060234 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-10 14:35:54.060241 | orchestrator | Saturday 10 January 2026 14:32:34 +0000 (0:00:02.431) 0:03:17.229 ****** 2026-01-10 14:35:54.060247 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.060254 | orchestrator | 2026-01-10 14:35:54.060260 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-10 14:35:54.060267 | orchestrator | Saturday 10 January 2026 14:32:35 +0000 (0:00:01.407) 0:03:18.637 ****** 2026-01-10 14:35:54.060274 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-10 14:35:54.060282 | orchestrator | 2026-01-10 14:35:54.060288 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-10 14:35:54.060295 | orchestrator | Saturday 10 January 2026 14:32:39 +0000 (0:00:03.134) 0:03:21.772 ****** 2026-01-10 14:35:54.060303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:35:54.060325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:35:54.060551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:35:54.060564 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.060570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:35:54.060583 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.060594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 
14:35:54.060613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:35:54.060620 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.060627 | orchestrator | 2026-01-10 14:35:54.060633 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-10 14:35:54.060640 | orchestrator | Saturday 10 January 2026 14:32:41 +0000 (0:00:02.815) 0:03:24.587 ****** 2026-01-10 14:35:54.060647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:35:54.060660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:35:54.060667 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.060682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:35:54.060689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:35:54.060695 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.060707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 
14:35:54.060716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:35:54.060723 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.060729 | orchestrator | 2026-01-10 14:35:54.060735 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-10 14:35:54.060741 | orchestrator | Saturday 10 January 2026 14:32:44 +0000 (0:00:02.718) 0:03:27.306 ****** 2026-01-10 14:35:54.060751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:35:54.060760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:35:54.060770 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.060784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:35:54.060791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:35:54.060798 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.060805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:35:54.060839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:35:54.060847 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.060854 | orchestrator | 2026-01-10 14:35:54.060860 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-10 14:35:54.060867 | orchestrator | Saturday 10 January 2026 14:32:47 +0000 (0:00:02.515) 0:03:29.821 ****** 2026-01-10 14:35:54.060873 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.060879 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.060885 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.060892 | orchestrator | 2026-01-10 14:35:54.060898 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-10 14:35:54.060905 | orchestrator | Saturday 10 January 2026 14:32:48 +0000 (0:00:01.534) 0:03:31.355 ****** 2026-01-10 14:35:54.060911 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.060917 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.060924 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.060930 | orchestrator | 
2026-01-10 14:35:54.060937 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-10 14:35:54.060943 | orchestrator | Saturday 10 January 2026 14:32:50 +0000 (0:00:01.464) 0:03:32.820 ****** 2026-01-10 14:35:54.060953 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.060960 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.060966 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.060972 | orchestrator | 2026-01-10 14:35:54.060978 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-10 14:35:54.060990 | orchestrator | Saturday 10 January 2026 14:32:50 +0000 (0:00:00.340) 0:03:33.161 ****** 2026-01-10 14:35:54.060996 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.061002 | orchestrator | 2026-01-10 14:35:54.061009 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-10 14:35:54.061015 | orchestrator | Saturday 10 January 2026 14:32:51 +0000 (0:00:01.420) 0:03:34.582 ****** 2026-01-10 14:35:54.061022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-10 14:35:54.061029 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-10 14:35:54.061036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-10 14:35:54.061043 | orchestrator | 2026-01-10 14:35:54.061050 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-10 14:35:54.061056 | orchestrator | Saturday 10 January 2026 14:32:53 +0000 (0:00:01.521) 0:03:36.104 ****** 2026-01-10 14:35:54.061066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': 
True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-10 14:35:54.061077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-10 14:35:54.061138 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.061144 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.061150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-10 14:35:54.061156 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.061162 | orchestrator | 2026-01-10 14:35:54.061378 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-10 14:35:54.061389 | orchestrator | Saturday 10 January 2026 14:32:53 +0000 (0:00:00.405) 0:03:36.509 ****** 2026-01-10 14:35:54.061397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-10 14:35:54.061406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-10 14:35:54.061413 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.061421 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.061428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-10 14:35:54.061436 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.061443 | orchestrator | 2026-01-10 14:35:54.061450 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 
2026-01-10 14:35:54.061456 | orchestrator | Saturday 10 January 2026 14:32:54 +0000 (0:00:00.914) 0:03:37.424 ****** 2026-01-10 14:35:54.061463 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.061469 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.061476 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.061482 | orchestrator | 2026-01-10 14:35:54.061489 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-10 14:35:54.061496 | orchestrator | Saturday 10 January 2026 14:32:55 +0000 (0:00:00.459) 0:03:37.883 ****** 2026-01-10 14:35:54.061502 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.061509 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.061515 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.061521 | orchestrator | 2026-01-10 14:35:54.061527 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-10 14:35:54.061533 | orchestrator | Saturday 10 January 2026 14:32:56 +0000 (0:00:01.365) 0:03:39.249 ****** 2026-01-10 14:35:54.061545 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.061553 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.061559 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.061566 | orchestrator | 2026-01-10 14:35:54.061576 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-10 14:35:54.061582 | orchestrator | Saturday 10 January 2026 14:32:56 +0000 (0:00:00.351) 0:03:39.600 ****** 2026-01-10 14:35:54.061589 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.061595 | orchestrator | 2026-01-10 14:35:54.061602 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-10 14:35:54.061609 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:01.578) 
0:03:41.179 ****** 2026-01-10 14:35:54.061623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:35:54.061630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.061639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.061647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.061658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:35:54.061673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.061686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.061694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.061703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.061710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.061717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': 
'30'}}})  2026-01-10 14:35:54.061733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:35:54.061741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.061752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.061760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:35:54.061767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.061774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:35:54.061788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.061797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  
2026-01-10 14:35:54.061804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.061833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:35:54.061840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062194 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.062201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2026-01-10 14:35:54.062221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:35:54.062231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 
14:35:54.062263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.062284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:35:54.062301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.062347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062367 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:35:54.062389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.062396 | orchestrator | 2026-01-10 14:35:54.062402 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-10 14:35:54.062408 | orchestrator | Saturday 10 January 2026 14:33:02 +0000 (0:00:04.415) 0:03:45.595 ****** 2026-01-10 14:35:54.062419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:35:54.062426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 
14:35:54.062459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:35:54.062466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062484 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.062626 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:35:54.062637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:35:54.062667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062732 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:35:54.062866 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:35:54.062876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-10 14:35:54.062896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.062904 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.062912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2026-01-10 14:35:54.062966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.062975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.062982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 
5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.062993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.063012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.063056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:35:54.063068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.063081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.063093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2026-01-10 14:35:54.063106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-10 14:35:54.063124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:35:54.063139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-10 14:35:54.063150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.063156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.063163 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.063170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:35:54.063183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:35:54.063190 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.063196 | orchestrator | 2026-01-10 14:35:54.063202 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-10 14:35:54.063208 | orchestrator | Saturday 10 January 2026 14:33:04 +0000 (0:00:01.424) 0:03:47.019 ****** 2026-01-10 14:35:54.063216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:35:54.063233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:35:54.063243 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.063253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:35:54.063261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:35:54.063267 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.063274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:35:54.063281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-10 14:35:54.063287 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.063294 | orchestrator | 2026-01-10 14:35:54.063301 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-10 14:35:54.063307 | orchestrator | Saturday 10 January 2026 14:33:06 +0000 (0:00:02.141) 0:03:49.161 ****** 2026-01-10 14:35:54.063314 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.063320 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.063326 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.063332 | orchestrator | 2026-01-10 14:35:54.063339 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-10 14:35:54.063345 | orchestrator | Saturday 10 January 2026 14:33:07 +0000 (0:00:01.353) 0:03:50.514 
****** 2026-01-10 14:35:54.063351 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.063357 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.063363 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.063369 | orchestrator | 2026-01-10 14:35:54.063375 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-10 14:35:54.063382 | orchestrator | Saturday 10 January 2026 14:33:09 +0000 (0:00:02.173) 0:03:52.688 ****** 2026-01-10 14:35:54.063388 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.063394 | orchestrator | 2026-01-10 14:35:54.063400 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-10 14:35:54.063406 | orchestrator | Saturday 10 January 2026 14:33:11 +0000 (0:00:01.221) 0:03:53.910 ****** 2026-01-10 14:35:54.063413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.063424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.063440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.063446 | orchestrator | 2026-01-10 14:35:54.063452 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-10 14:35:54.063459 | orchestrator | Saturday 10 January 2026 14:33:14 +0000 (0:00:03.760) 0:03:57.670 
****** 2026-01-10 14:35:54.063465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.063472 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.063478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.063484 | orchestrator | skipping: 
[testbed-node-1] 2026-01-10 14:35:54.063494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.063505 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.063511 | orchestrator | 2026-01-10 14:35:54.063518 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-10 14:35:54.063524 | orchestrator | Saturday 10 January 2026 14:33:15 +0000 (0:00:00.624) 0:03:58.294 ****** 2026-01-10 14:35:54.063530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:35:54.063537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:35:54.063544 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.063870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:35:54.063890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:35:54.063897 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.063904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:35:54.063911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-10 14:35:54.063918 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.063925 | orchestrator | 2026-01-10 14:35:54.063931 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-10 14:35:54.063938 | orchestrator | Saturday 10 January 2026 14:33:16 +0000 (0:00:00.758) 0:03:59.053 ****** 2026-01-10 14:35:54.063944 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.063951 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.063957 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.063964 | orchestrator | 2026-01-10 14:35:54.063972 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-10 14:35:54.063979 | orchestrator | Saturday 10 January 2026 14:33:18 +0000 (0:00:01.996) 0:04:01.050 ****** 2026-01-10 14:35:54.063986 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.063992 | orchestrator | changed: [testbed-node-1] 2026-01-10 
14:35:54.063999 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.064005 | orchestrator | 2026-01-10 14:35:54.064011 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-10 14:35:54.064018 | orchestrator | Saturday 10 January 2026 14:33:20 +0000 (0:00:01.810) 0:04:02.860 ****** 2026-01-10 14:35:54.064025 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.064031 | orchestrator | 2026-01-10 14:35:54.064038 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-10 14:35:54.064053 | orchestrator | Saturday 10 January 2026 14:33:21 +0000 (0:00:01.687) 0:04:04.547 ****** 2026-01-10 14:35:54.064061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2026-01-10 14:35:54.064075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.064104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.064136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064148 | orchestrator | 2026-01-10 14:35:54.064154 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-10 14:35:54.064160 | orchestrator | Saturday 10 January 2026 14:33:26 +0000 (0:00:04.259) 0:04:08.806 ****** 2026-01-10 14:35:54.064167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.064177 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064193 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.064209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.064218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064236 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.064243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.064253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.064271 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.064278 | orchestrator | 2026-01-10 14:35:54.064284 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-10 14:35:54.064290 | orchestrator | Saturday 10 January 2026 14:33:27 +0000 (0:00:01.392) 0:04:10.199 ****** 2026-01-10 14:35:54.064296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064328 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.064340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064369 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.064376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-10 14:35:54.064409 | orchestrator | 
skipping: [testbed-node-2] 2026-01-10 14:35:54.064416 | orchestrator | 2026-01-10 14:35:54.064422 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-10 14:35:54.064429 | orchestrator | Saturday 10 January 2026 14:33:28 +0000 (0:00:00.954) 0:04:11.154 ****** 2026-01-10 14:35:54.064435 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.064441 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.064448 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.064454 | orchestrator | 2026-01-10 14:35:54.064461 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-10 14:35:54.064468 | orchestrator | Saturday 10 January 2026 14:33:29 +0000 (0:00:01.447) 0:04:12.601 ****** 2026-01-10 14:35:54.064475 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.064482 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.064488 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.064495 | orchestrator | 2026-01-10 14:35:54.064501 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-10 14:35:54.064508 | orchestrator | Saturday 10 January 2026 14:33:31 +0000 (0:00:02.082) 0:04:14.683 ****** 2026-01-10 14:35:54.064515 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.064521 | orchestrator | 2026-01-10 14:35:54.064528 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-10 14:35:54.064538 | orchestrator | Saturday 10 January 2026 14:33:33 +0000 (0:00:01.618) 0:04:16.301 ****** 2026-01-10 14:35:54.064549 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-10 14:35:54.064557 | orchestrator | 2026-01-10 14:35:54.064563 | orchestrator | TASK 
[haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-10 14:35:54.064570 | orchestrator | Saturday 10 January 2026 14:33:34 +0000 (0:00:00.881) 0:04:17.183 ****** 2026-01-10 14:35:54.064577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-10 14:35:54.064584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-10 14:35:54.064591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-10 14:35:54.064598 | orchestrator | 2026-01-10 14:35:54.064652 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external 
frontend] *** 2026-01-10 14:35:54.064660 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:04.752) 0:04:21.936 ****** 2026-01-10 14:35:54.064668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:35:54.064675 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.064682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:35:54.064689 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.064700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:35:54.064711 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.064718 | 
orchestrator | 2026-01-10 14:35:54.064725 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-10 14:35:54.064731 | orchestrator | Saturday 10 January 2026 14:33:40 +0000 (0:00:01.058) 0:04:22.994 ****** 2026-01-10 14:35:54.064740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:35:54.064747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:35:54.064755 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.064763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:35:54.064992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:35:54.065005 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.065011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:35:54.065018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:35:54.065024 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.065031 | orchestrator | 2026-01-10 14:35:54.065037 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-10 14:35:54.065043 | orchestrator | Saturday 10 January 2026 14:33:41 +0000 (0:00:01.647) 0:04:24.642 ****** 2026-01-10 14:35:54.065049 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.065055 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.065062 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.065068 | orchestrator | 2026-01-10 14:35:54.065074 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-10 14:35:54.065080 | orchestrator | Saturday 10 January 2026 14:33:44 +0000 (0:00:02.577) 0:04:27.220 ****** 2026-01-10 14:35:54.065086 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.065092 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.065100 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.065107 | orchestrator | 2026-01-10 14:35:54.065114 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-10 14:35:54.065119 | orchestrator | Saturday 10 January 2026 14:33:47 +0000 (0:00:03.068) 0:04:30.289 ****** 2026-01-10 14:35:54.065126 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-10 14:35:54.065133 | orchestrator | 2026-01-10 14:35:54.065139 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-10 14:35:54.065144 | orchestrator | Saturday 10 January 2026 14:33:49 +0000 (0:00:01.454) 0:04:31.743 ****** 2026-01-10 14:35:54.065150 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:35:54.065165 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.065176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:35:54.065183 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.065195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:35:54.065202 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.065208 | orchestrator | 2026-01-10 14:35:54.065215 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy 
when using single external frontend] *** 2026-01-10 14:35:54.065221 | orchestrator | Saturday 10 January 2026 14:33:50 +0000 (0:00:01.261) 0:04:33.005 ****** 2026-01-10 14:35:54.065228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:35:54.065235 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.065241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:35:54.065248 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.065255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
2026-01-10 14:35:54.065262 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.065268 | orchestrator | 2026-01-10 14:35:54.065275 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-10 14:35:54.065281 | orchestrator | Saturday 10 January 2026 14:33:51 +0000 (0:00:01.403) 0:04:34.408 ****** 2026-01-10 14:35:54.065288 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.065299 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.065306 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.065313 | orchestrator | 2026-01-10 14:35:54.065319 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-10 14:35:54.065326 | orchestrator | Saturday 10 January 2026 14:33:53 +0000 (0:00:02.088) 0:04:36.497 ****** 2026-01-10 14:35:54.065332 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.065339 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.065345 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.065350 | orchestrator | 2026-01-10 14:35:54.065356 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-10 14:35:54.065362 | orchestrator | Saturday 10 January 2026 14:33:56 +0000 (0:00:02.692) 0:04:39.189 ****** 2026-01-10 14:35:54.065368 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.065378 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.065386 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.065392 | orchestrator | 2026-01-10 14:35:54.065398 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-10 14:35:54.065405 | orchestrator | Saturday 10 January 2026 14:33:59 +0000 (0:00:03.350) 0:04:42.539 ****** 2026-01-10 14:35:54.065412 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item=nova-serialproxy) 2026-01-10 14:35:54.065418 | orchestrator | 2026-01-10 14:35:54.065425 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-10 14:35:54.065436 | orchestrator | Saturday 10 January 2026 14:34:00 +0000 (0:00:00.930) 0:04:43.470 ****** 2026-01-10 14:35:54.065443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:35:54.065450 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.065462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:35:54.065469 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.065476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:35:54.065482 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.065489 | orchestrator | 2026-01-10 14:35:54.065495 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-10 14:35:54.065507 | orchestrator | Saturday 10 January 2026 14:34:02 +0000 (0:00:01.500) 0:04:44.970 ****** 2026-01-10 14:35:54.065516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:35:54.065529 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.065536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:35:54.065543 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.065550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:35:54.065557 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.065563 | orchestrator | 2026-01-10 14:35:54.065570 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-10 14:35:54.065577 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:01.470) 0:04:46.441 ****** 2026-01-10 14:35:54.065583 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.065589 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.065595 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.065601 | orchestrator | 2026-01-10 14:35:54.065608 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-10 14:35:54.065617 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:01.839) 0:04:48.280 ****** 2026-01-10 14:35:54.065624 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.065630 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.065637 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.065644 | orchestrator | 2026-01-10 14:35:54.065650 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-10 14:35:54.065657 | orchestrator | Saturday 10 January 2026 14:34:08 +0000 (0:00:02.667) 0:04:50.948 ****** 2026-01-10 14:35:54.065664 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.065670 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.065677 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.065683 | orchestrator | 2026-01-10 14:35:54.065690 | 
orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-10 14:35:54.065696 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:03.595) 0:04:54.543 ****** 2026-01-10 14:35:54.065703 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.065710 | orchestrator | 2026-01-10 14:35:54.065716 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-10 14:35:54.065723 | orchestrator | Saturday 10 January 2026 14:34:13 +0000 (0:00:01.712) 0:04:56.256 ****** 2026-01-10 14:35:54.065734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.065747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:35:54.065755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.065762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.065772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:35:54.065780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.065791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.065802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.065810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.065834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.065845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.065853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:35:54.065863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.065875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.065882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.065888 | orchestrator | 2026-01-10 14:35:54.065893 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-10 14:35:54.065899 | orchestrator | Saturday 10 January 2026 14:34:17 +0000 (0:00:03.709) 0:04:59.966 ****** 2026-01-10 14:35:54.065905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.065911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:35:54.065923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.065932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.065944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.065950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.065957 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.065963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:35:54.065970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.065980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.065987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.066004 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.066047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.066056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:35:54.066064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.066071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:35:54.066331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:35:54.066346 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.066353 | orchestrator | 2026-01-10 14:35:54.066360 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-10 
14:35:54.066373 | orchestrator | Saturday 10 January 2026 14:34:17 +0000 (0:00:00.750) 0:05:00.716 ****** 2026-01-10 14:35:54.066380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:35:54.066388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:35:54.066395 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.066406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:35:54.066413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:35:54.066420 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.066427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:35:54.066433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-10 14:35:54.066440 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.066447 | orchestrator | 2026-01-10 14:35:54.066453 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] 
************ 2026-01-10 14:35:54.066460 | orchestrator | Saturday 10 January 2026 14:34:19 +0000 (0:00:01.647) 0:05:02.364 ****** 2026-01-10 14:35:54.066465 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.066471 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.066477 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.066483 | orchestrator | 2026-01-10 14:35:54.066489 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-10 14:35:54.066498 | orchestrator | Saturday 10 January 2026 14:34:21 +0000 (0:00:01.542) 0:05:03.906 ****** 2026-01-10 14:35:54.066505 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.066512 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.066518 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.066525 | orchestrator | 2026-01-10 14:35:54.066531 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-10 14:35:54.066538 | orchestrator | Saturday 10 January 2026 14:34:23 +0000 (0:00:02.324) 0:05:06.231 ****** 2026-01-10 14:35:54.066545 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.066552 | orchestrator | 2026-01-10 14:35:54.066558 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-10 14:35:54.066565 | orchestrator | Saturday 10 January 2026 14:34:24 +0000 (0:00:01.433) 0:05:07.664 ****** 2026-01-10 14:35:54.066572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:35:54.066590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:35:54.066601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:35:54.066609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:35:54.066617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:35:54.066629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:35:54.066646 | orchestrator | 2026-01-10 14:35:54.066653 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-10 14:35:54.066660 | orchestrator | Saturday 10 January 2026 14:34:30 +0000 (0:00:05.517) 0:05:13.182 ****** 2026-01-10 14:35:54.066670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:35:54.066678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:35:54.066685 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.066692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:35:54.066708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:35:54.066715 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.066725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:35:54.066732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:35:54.066738 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.066745 | orchestrator | 2026-01-10 14:35:54.066751 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] 
******************** 2026-01-10 14:35:54.066758 | orchestrator | Saturday 10 January 2026 14:34:31 +0000 (0:00:00.697) 0:05:13.880 ****** 2026-01-10 14:35:54.066765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-10 14:35:54.066772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:35:54.066778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:35:54.066790 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.066872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-10 14:35:54.066879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:35:54.066886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:35:54.066892 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.066899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-10 14:35:54.066905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:35:54.066914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-10 14:35:54.066921 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.066927 | orchestrator | 2026-01-10 14:35:54.066933 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-10 14:35:54.066940 | orchestrator | Saturday 10 January 2026 14:34:32 +0000 (0:00:00.971) 0:05:14.851 ****** 2026-01-10 14:35:54.066946 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.066952 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.066958 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.066965 | orchestrator | 2026-01-10 14:35:54.066972 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-10 14:35:54.066978 | orchestrator | Saturday 10 January 2026 14:34:33 +0000 (0:00:00.926) 0:05:15.778 ****** 2026-01-10 14:35:54.066985 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.066992 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.067258 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.067269 | orchestrator | 2026-01-10 14:35:54.067281 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-10 14:35:54.067288 | orchestrator | Saturday 10 January 2026 14:34:34 +0000 
(0:00:01.508) 0:05:17.286 ****** 2026-01-10 14:35:54.067296 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.067302 | orchestrator | 2026-01-10 14:35:54.067309 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-10 14:35:54.067315 | orchestrator | Saturday 10 January 2026 14:34:36 +0000 (0:00:01.470) 0:05:18.757 ****** 2026-01-10 14:35:54.067322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:35:54.067337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:35:54.067345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.067375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:35:54.067387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:35:54.067394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:35:54.067413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:35:54.067429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.067437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.067466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:35:54.067475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:35:54.067482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067493 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.067511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 
14:35:54.067521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:35:54.067528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067550 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.067561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:35:54.067568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:35:54.067580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2026-01-10 14:35:54.067601 | orchestrator | 2026-01-10 14:35:54.067608 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-10 14:35:54.067614 | orchestrator | Saturday 10 January 2026 14:34:40 +0000 (0:00:04.721) 0:05:23.478 ****** 2026-01-10 14:35:54.067624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-10 14:35:54.067631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:35:54.067642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.067664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-10 14:35:54.067862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:35:54.067877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.067909 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.067916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-10 14:35:54.067923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:35:54.067930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.067943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.067955 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-10 14:35:54.067988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:35:54.067995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.068001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.068008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.068014 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.068024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-10 14:35:54.068039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:35:54.068049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.068060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.068067 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.068074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-10 14:35:54.068085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-10 14:35:54.068099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.068109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:35:54.068115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:35:54.068121 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.068128 | orchestrator | 2026-01-10 14:35:54.068135 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-10 14:35:54.068141 | orchestrator | Saturday 10 January 2026 14:34:42 +0000 (0:00:01.598) 0:05:25.077 ****** 2026-01-10 14:35:54.068148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-10 14:35:54.068155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-10 14:35:54.068162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:35:54.068169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:35:54.068177 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.068184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-10 14:35:54.068191 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-10 14:35:54.068197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:35:54.068204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:35:54.068217 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.068227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-10 14:35:54.068234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-10 14:35:54.068241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:35:54.068252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-10 14:35:54.068259 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.068266 | orchestrator | 2026-01-10 14:35:54.068272 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-10 14:35:54.068279 | orchestrator | Saturday 10 January 2026 14:34:43 +0000 (0:00:01.084) 0:05:26.162 ****** 2026-01-10 14:35:54.068286 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.068293 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.068299 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.068306 | orchestrator | 2026-01-10 14:35:54.068312 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-10 14:35:54.068319 | orchestrator | Saturday 10 January 2026 14:34:43 +0000 (0:00:00.499) 0:05:26.661 ****** 2026-01-10 14:35:54.068325 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.068332 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.068338 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.068344 | orchestrator | 2026-01-10 14:35:54.068351 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-10 14:35:54.068357 | orchestrator | Saturday 10 January 2026 14:34:45 +0000 (0:00:01.659) 0:05:28.320 ****** 2026-01-10 14:35:54.068363 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.068369 | orchestrator | 2026-01-10 14:35:54.068376 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-10 14:35:54.068382 | orchestrator | Saturday 10 January 2026 14:34:47 +0000 (0:00:01.839) 0:05:30.160 ****** 2026-01-10 14:35:54.068388 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:35:54.068396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:35:54.068410 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:35:54.068418 | orchestrator | 2026-01-10 14:35:54.068426 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-10 14:35:54.068436 | orchestrator | Saturday 10 January 2026 14:34:50 +0000 (0:00:02.650) 0:05:32.810 ****** 2026-01-10 14:35:54.068443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:35:54.068450 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.068457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:35:54.068468 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.068476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:35:54.068483 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.068490 | orchestrator | 2026-01-10 14:35:54.068498 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-10 14:35:54.068505 | orchestrator | Saturday 10 January 2026 14:34:50 +0000 (0:00:00.415) 0:05:33.226 ****** 2026-01-10 14:35:54.068511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-10 14:35:54.068517 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.068526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-10 14:35:54.068533 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.068540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-10 14:35:54.068546 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.068553 | orchestrator | 2026-01-10 14:35:54.068560 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-10 14:35:54.068566 | orchestrator | Saturday 10 January 2026 14:34:51 +0000 (0:00:01.070) 0:05:34.297 ****** 2026-01-10 14:35:54.068573 | orchestrator | skipping: [testbed-node-0] 
2026-01-10 14:35:54.068584 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.068590 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.068596 | orchestrator | 2026-01-10 14:35:54.068603 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-10 14:35:54.068609 | orchestrator | Saturday 10 January 2026 14:34:52 +0000 (0:00:00.456) 0:05:34.753 ****** 2026-01-10 14:35:54.068615 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.068621 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.068627 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.068636 | orchestrator | 2026-01-10 14:35:54.068645 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-10 14:35:54.068653 | orchestrator | Saturday 10 January 2026 14:34:53 +0000 (0:00:01.372) 0:05:36.126 ****** 2026-01-10 14:35:54.068660 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:35:54.068667 | orchestrator | 2026-01-10 14:35:54.068673 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-10 14:35:54.068680 | orchestrator | Saturday 10 January 2026 14:34:55 +0000 (0:00:01.845) 0:05:37.972 ****** 2026-01-10 14:35:54.068687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.068700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.068711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.068722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.068729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.068740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-10 14:35:54.068747 | orchestrator | 2026-01-10 14:35:54.068753 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-10 14:35:54.068759 | orchestrator | Saturday 10 January 2026 14:35:01 +0000 (0:00:06.430) 0:05:44.403 ****** 2026-01-10 14:35:54.068769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.068779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http',
2026-01-10 14:35:54 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED
2026-01-10 14:35:54.068786 | orchestrator | 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.068793 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.068799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.068810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.068839 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.068845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.068855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-10 14:35:54.068862 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.068868 | orchestrator | 2026-01-10 14:35:54.068878 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-10 14:35:54.068884 | orchestrator | Saturday 10 January 2026 14:35:02 +0000 (0:00:00.714) 0:05:45.117 ****** 2026-01-10 14:35:54.068891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  
2026-01-10 14:35:54.068905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068925 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.068931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068957 | orchestrator | skipping: [testbed-node-1] 2026-01-10 
14:35:54.068963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-10 14:35:54.068986 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.068992 | orchestrator | 2026-01-10 14:35:54.068998 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-10 14:35:54.069004 | orchestrator | Saturday 10 January 2026 14:35:04 +0000 (0:00:02.010) 0:05:47.128 ****** 2026-01-10 14:35:54.069010 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.069020 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.069026 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.069032 | orchestrator | 2026-01-10 14:35:54.069039 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-10 14:35:54.069045 | orchestrator | Saturday 10 January 2026 14:35:05 +0000 (0:00:01.500) 0:05:48.629 ****** 2026-01-10 14:35:54.069051 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.069057 | orchestrator | 
changed: [testbed-node-1] 2026-01-10 14:35:54.069063 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.069069 | orchestrator | 2026-01-10 14:35:54.069075 | orchestrator | TASK [include_role : swift] **************************************************** 2026-01-10 14:35:54.069086 | orchestrator | Saturday 10 January 2026 14:35:08 +0000 (0:00:02.425) 0:05:51.055 ****** 2026-01-10 14:35:54.069093 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069099 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069105 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069111 | orchestrator | 2026-01-10 14:35:54.069117 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-10 14:35:54.069123 | orchestrator | Saturday 10 January 2026 14:35:08 +0000 (0:00:00.353) 0:05:51.409 ****** 2026-01-10 14:35:54.069130 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069136 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069143 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069148 | orchestrator | 2026-01-10 14:35:54.069158 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-10 14:35:54.069164 | orchestrator | Saturday 10 January 2026 14:35:09 +0000 (0:00:00.325) 0:05:51.734 ****** 2026-01-10 14:35:54.069171 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069177 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069184 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069191 | orchestrator | 2026-01-10 14:35:54.069197 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-10 14:35:54.069204 | orchestrator | Saturday 10 January 2026 14:35:09 +0000 (0:00:00.684) 0:05:52.419 ****** 2026-01-10 14:35:54.069211 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069216 | orchestrator | 
skipping: [testbed-node-1] 2026-01-10 14:35:54.069223 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069230 | orchestrator | 2026-01-10 14:35:54.069236 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-10 14:35:54.069243 | orchestrator | Saturday 10 January 2026 14:35:10 +0000 (0:00:00.345) 0:05:52.764 ****** 2026-01-10 14:35:54.069250 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069257 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069263 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069270 | orchestrator | 2026-01-10 14:35:54.069276 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-10 14:35:54.069283 | orchestrator | Saturday 10 January 2026 14:35:10 +0000 (0:00:00.395) 0:05:53.160 ****** 2026-01-10 14:35:54.069290 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069296 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069303 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069310 | orchestrator | 2026-01-10 14:35:54.069317 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-10 14:35:54.069324 | orchestrator | Saturday 10 January 2026 14:35:11 +0000 (0:00:00.965) 0:05:54.125 ****** 2026-01-10 14:35:54.069331 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.069338 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.069344 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.069351 | orchestrator | 2026-01-10 14:35:54.069358 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-10 14:35:54.069364 | orchestrator | Saturday 10 January 2026 14:35:12 +0000 (0:00:00.794) 0:05:54.920 ****** 2026-01-10 14:35:54.069371 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.069377 | orchestrator | ok: [testbed-node-1] 
2026-01-10 14:35:54.069383 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.069390 | orchestrator | 2026-01-10 14:35:54.069396 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-10 14:35:54.069403 | orchestrator | Saturday 10 January 2026 14:35:12 +0000 (0:00:00.369) 0:05:55.289 ****** 2026-01-10 14:35:54.069410 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.069417 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.069423 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.069429 | orchestrator | 2026-01-10 14:35:54.069436 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-10 14:35:54.069442 | orchestrator | Saturday 10 January 2026 14:35:13 +0000 (0:00:01.036) 0:05:56.326 ****** 2026-01-10 14:35:54.069453 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.069460 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.069466 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.069473 | orchestrator | 2026-01-10 14:35:54.069480 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-10 14:35:54.069486 | orchestrator | Saturday 10 January 2026 14:35:14 +0000 (0:00:01.335) 0:05:57.662 ****** 2026-01-10 14:35:54.069493 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.069499 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.069506 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.069512 | orchestrator | 2026-01-10 14:35:54.069519 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-10 14:35:54.069525 | orchestrator | Saturday 10 January 2026 14:35:15 +0000 (0:00:00.972) 0:05:58.634 ****** 2026-01-10 14:35:54.069532 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.069538 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.069545 | orchestrator | changed: 
[testbed-node-2] 2026-01-10 14:35:54.069552 | orchestrator | 2026-01-10 14:35:54.069558 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-10 14:35:54.069565 | orchestrator | Saturday 10 January 2026 14:35:20 +0000 (0:00:04.833) 0:06:03.467 ****** 2026-01-10 14:35:54.069571 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.069578 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.069584 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.069590 | orchestrator | 2026-01-10 14:35:54.069596 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-10 14:35:54.069603 | orchestrator | Saturday 10 January 2026 14:35:23 +0000 (0:00:02.806) 0:06:06.274 ****** 2026-01-10 14:35:54.069610 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.069617 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.069623 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.069630 | orchestrator | 2026-01-10 14:35:54.069640 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-10 14:35:54.069647 | orchestrator | Saturday 10 January 2026 14:35:32 +0000 (0:00:09.172) 0:06:15.446 ****** 2026-01-10 14:35:54.069654 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.069660 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.069666 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.069672 | orchestrator | 2026-01-10 14:35:54.069678 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-10 14:35:54.069685 | orchestrator | Saturday 10 January 2026 14:35:36 +0000 (0:00:04.230) 0:06:19.676 ****** 2026-01-10 14:35:54.069696 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:35:54.069702 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:35:54.069708 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:35:54.069715 | 
orchestrator | 2026-01-10 14:35:54.069722 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-10 14:35:54.069728 | orchestrator | Saturday 10 January 2026 14:35:46 +0000 (0:00:09.462) 0:06:29.139 ****** 2026-01-10 14:35:54.069735 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069742 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069748 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069755 | orchestrator | 2026-01-10 14:35:54.069761 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-10 14:35:54.069772 | orchestrator | Saturday 10 January 2026 14:35:46 +0000 (0:00:00.357) 0:06:29.496 ****** 2026-01-10 14:35:54.069779 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069785 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069791 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069797 | orchestrator | 2026-01-10 14:35:54.069803 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-10 14:35:54.069809 | orchestrator | Saturday 10 January 2026 14:35:47 +0000 (0:00:00.408) 0:06:29.905 ****** 2026-01-10 14:35:54.069836 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069849 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069863 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069869 | orchestrator | 2026-01-10 14:35:54.069876 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-10 14:35:54.069883 | orchestrator | Saturday 10 January 2026 14:35:47 +0000 (0:00:00.764) 0:06:30.669 ****** 2026-01-10 14:35:54.069889 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069896 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069903 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069909 | 
orchestrator | 2026-01-10 14:35:54.069916 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-10 14:35:54.069922 | orchestrator | Saturday 10 January 2026 14:35:48 +0000 (0:00:00.361) 0:06:31.031 ****** 2026-01-10 14:35:54.069929 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069935 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069941 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069948 | orchestrator | 2026-01-10 14:35:54.069954 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-10 14:35:54.069960 | orchestrator | Saturday 10 January 2026 14:35:48 +0000 (0:00:00.357) 0:06:31.388 ****** 2026-01-10 14:35:54.069967 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:35:54.069973 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:35:54.069979 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:35:54.069986 | orchestrator | 2026-01-10 14:35:54.069992 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-10 14:35:54.069998 | orchestrator | Saturday 10 January 2026 14:35:49 +0000 (0:00:00.357) 0:06:31.746 ****** 2026-01-10 14:35:54.070005 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.070043 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.070051 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.070058 | orchestrator | 2026-01-10 14:35:54.070064 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-10 14:35:54.070070 | orchestrator | Saturday 10 January 2026 14:35:50 +0000 (0:00:01.416) 0:06:33.163 ****** 2026-01-10 14:35:54.070077 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:35:54.070083 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:35:54.070089 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:35:54.070095 | orchestrator | 2026-01-10 
14:35:54.070102 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:35:54.070108 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-10 14:35:54.070116 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-10 14:35:54.070123 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-10 14:35:54.070129 | orchestrator | 2026-01-10 14:35:54.070135 | orchestrator | 2026-01-10 14:35:54.070141 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:35:54.070147 | orchestrator | Saturday 10 January 2026 14:35:51 +0000 (0:00:00.893) 0:06:34.056 ****** 2026-01-10 14:35:54.070153 | orchestrator | =============================================================================== 2026-01-10 14:35:54.070159 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.46s 2026-01-10 14:35:54.070165 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.17s 2026-01-10 14:35:54.070171 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.43s 2026-01-10 14:35:54.070178 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.32s 2026-01-10 14:35:54.070185 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.04s 2026-01-10 14:35:54.070191 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.52s 2026-01-10 14:35:54.070197 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.08s 2026-01-10 14:35:54.070213 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.04s 2026-01-10 14:35:54.070220 | 
orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.83s 2026-01-10 14:35:54.070226 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.79s 2026-01-10 14:35:54.070233 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.75s 2026-01-10 14:35:54.070239 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.75s 2026-01-10 14:35:54.070246 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.72s 2026-01-10 14:35:54.070252 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.57s 2026-01-10 14:35:54.070258 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.42s 2026-01-10 14:35:54.070265 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.26s 2026-01-10 14:35:54.070272 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.23s 2026-01-10 14:35:54.070278 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.12s 2026-01-10 14:35:54.070290 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.11s 2026-01-10 14:35:54.070297 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 4.01s 2026-01-10 14:35:54.070304 | orchestrator | 2026-01-10 14:35:54 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:54.070312 | orchestrator | 2026-01-10 14:35:54 | INFO  | Task 1dd86352-9a7c-4b94-821d-1148a0517a1f is in state STARTED 2026-01-10 14:35:54.070318 | orchestrator | 2026-01-10 14:35:54 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:57.101548 | orchestrator | 2026-01-10 14:35:57 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 
2026-01-10 14:35:57.102808 | orchestrator | 2026-01-10 14:35:57 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:35:57.104526 | orchestrator | 2026-01-10 14:35:57 | INFO  | Task 1dd86352-9a7c-4b94-821d-1148a0517a1f is in state STARTED 2026-01-10 14:35:57.104570 | orchestrator | 2026-01-10 14:35:57 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:38:29.821621 | orchestrator | 
2026-01-10 14:38:29 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:38:29.823367 | orchestrator | 2026-01-10 14:38:29 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:38:29.825244 | orchestrator | 2026-01-10 14:38:29 | INFO  | Task 1dd86352-9a7c-4b94-821d-1148a0517a1f is in state STARTED 2026-01-10 14:38:29.825463 | orchestrator | 2026-01-10 14:38:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:38:32.879253 | orchestrator | 2026-01-10 14:38:32 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:38:32.879480 | orchestrator | 2026-01-10 14:38:32 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:38:32.880553 | orchestrator | 2026-01-10 14:38:32 | INFO  | Task 1dd86352-9a7c-4b94-821d-1148a0517a1f is in state STARTED 2026-01-10 14:38:32.880620 | orchestrator | 2026-01-10 14:38:32 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:38:35.929017 | orchestrator | 2026-01-10 14:38:35 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:38:35.930361 | orchestrator | 2026-01-10 14:38:35 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:38:35.931997 | orchestrator | 2026-01-10 14:38:35 | INFO  | Task 1dd86352-9a7c-4b94-821d-1148a0517a1f is in state STARTED 2026-01-10 14:38:35.932028 | orchestrator | 2026-01-10 14:38:35 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:38:38.975794 | orchestrator | 2026-01-10 14:38:38 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:38:38.977626 | orchestrator | 2026-01-10 14:38:38 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:38:38.980863 | orchestrator | 2026-01-10 14:38:38 | INFO  | Task 1dd86352-9a7c-4b94-821d-1148a0517a1f is in state STARTED 2026-01-10 14:38:38.980980 | orchestrator | 2026-01-10 14:38:38 | INFO  | 
Wait 1 second(s) until the next check 2026-01-10 14:38:42.035187 | orchestrator | 2026-01-10 14:38:42 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:38:42.035913 | orchestrator | 2026-01-10 14:38:42 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state STARTED 2026-01-10 14:38:42.036656 | orchestrator | 2026-01-10 14:38:42 | INFO  | Task 1dd86352-9a7c-4b94-821d-1148a0517a1f is in state STARTED 2026-01-10 14:38:42.036784 | orchestrator | 2026-01-10 14:38:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:38:45.077387 | orchestrator | 2026-01-10 14:38:45 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:38:45.089783 | orchestrator | 2026-01-10 14:38:45 | INFO  | Task 96eaaccc-3847-4974-abe3-5fea076e18db is in state SUCCESS 2026-01-10 14:38:45.091876 | orchestrator | 2026-01-10 14:38:45.091940 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-10 14:38:45.091948 | orchestrator | 2.16.14 2026-01-10 14:38:45.091954 | orchestrator | 2026-01-10 14:38:45.091959 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-10 14:38:45.091964 | orchestrator | 2026-01-10 14:38:45.091968 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-10 14:38:45.091972 | orchestrator | Saturday 10 January 2026 14:26:59 +0000 (0:00:00.882) 0:00:00.882 ****** 2026-01-10 14:38:45.091978 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:38:45.091983 | orchestrator | 2026-01-10 14:38:45.091987 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-10 14:38:45.091991 | orchestrator | Saturday 10 January 2026 14:27:00 +0000 (0:00:01.176) 0:00:02.058 ****** 2026-01-10 
14:38:45.091995 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.091999 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.092003 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.092006 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.092010 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.092014 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.092018 | orchestrator | 2026-01-10 14:38:45.092022 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-10 14:38:45.092025 | orchestrator | Saturday 10 January 2026 14:27:02 +0000 (0:00:01.761) 0:00:03.820 ****** 2026-01-10 14:38:45.092029 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.092033 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.092037 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.092040 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.092045 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.092049 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.092052 | orchestrator | 2026-01-10 14:38:45.092056 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-10 14:38:45.092060 | orchestrator | Saturday 10 January 2026 14:27:03 +0000 (0:00:00.747) 0:00:04.567 ****** 2026-01-10 14:38:45.092064 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.092068 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.092071 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.092075 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.092079 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.092083 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.092086 | orchestrator | 2026-01-10 14:38:45.092090 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-10 14:38:45.092094 | orchestrator | Saturday 10 January 2026 14:27:04 +0000 (0:00:00.988) 
0:00:05.555 ****** 2026-01-10 14:38:45.092098 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.092101 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.092105 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.092161 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.092168 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.092260 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.092269 | orchestrator | 2026-01-10 14:38:45.092288 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-10 14:38:45.092296 | orchestrator | Saturday 10 January 2026 14:27:04 +0000 (0:00:00.617) 0:00:06.173 ****** 2026-01-10 14:38:45.092302 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.092308 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.092314 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.092320 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.092325 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.092331 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.092338 | orchestrator | 2026-01-10 14:38:45.092342 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-10 14:38:45.092346 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.558) 0:00:06.731 ****** 2026-01-10 14:38:45.092350 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.092354 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.092357 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.092361 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.092364 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.092368 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.092372 | orchestrator | 2026-01-10 14:38:45.092376 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-10 14:38:45.092379 | orchestrator | Saturday 10 January 
2026 14:27:06 +0000 (0:00:00.931) 0:00:07.662 ****** 2026-01-10 14:38:45.092383 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092388 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.092391 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.092395 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.092399 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.092403 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.092406 | orchestrator | 2026-01-10 14:38:45.092410 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-10 14:38:45.092414 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:01.040) 0:00:08.703 ****** 2026-01-10 14:38:45.092418 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.092421 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.092426 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.092429 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.092433 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.092437 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.092440 | orchestrator | 2026-01-10 14:38:45.092444 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-10 14:38:45.092448 | orchestrator | Saturday 10 January 2026 14:27:08 +0000 (0:00:00.883) 0:00:09.587 ****** 2026-01-10 14:38:45.092452 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:38:45.092456 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:38:45.092461 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:38:45.092465 | orchestrator | 2026-01-10 14:38:45.092469 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-10 
14:38:45.092473 | orchestrator | Saturday 10 January 2026 14:27:09 +0000 (0:00:00.698) 0:00:10.286 ****** 2026-01-10 14:38:45.092477 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.092482 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.092486 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.092503 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.092508 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.092512 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.092516 | orchestrator | 2026-01-10 14:38:45.092521 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-10 14:38:45.092525 | orchestrator | Saturday 10 January 2026 14:27:10 +0000 (0:00:01.462) 0:00:11.748 ****** 2026-01-10 14:38:45.092529 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:38:45.092541 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:38:45.092545 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:38:45.092550 | orchestrator | 2026-01-10 14:38:45.092554 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-10 14:38:45.092559 | orchestrator | Saturday 10 January 2026 14:27:13 +0000 (0:00:03.291) 0:00:15.039 ****** 2026-01-10 14:38:45.092563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-10 14:38:45.092568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-10 14:38:45.092572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-10 14:38:45.092577 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092581 | orchestrator | 2026-01-10 14:38:45.092585 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-10 14:38:45.092589 | 
orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:01.667) 0:00:16.707 ****** 2026-01-10 14:38:45.092594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.092601 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.092605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.092609 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092613 | orchestrator | 2026-01-10 14:38:45.092617 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-10 14:38:45.092623 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:00.726) 0:00:17.433 ****** 2026-01-10 14:38:45.092629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.092636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.092640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.092644 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092648 | orchestrator | 2026-01-10 14:38:45.092651 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-10 14:38:45.092655 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:00.516) 0:00:17.949 ****** 2026-01-10 14:38:45.092684 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-10 14:27:11.373253', 'end': '2026-01-10 14:27:11.673731', 'delta': '0:00:00.300478', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.092695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': 
'2026-01-10 14:27:12.325799', 'end': '2026-01-10 14:27:12.600554', 'delta': '0:00:00.274755', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.092699 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-10 14:27:13.228340', 'end': '2026-01-10 14:27:13.502261', 'delta': '0:00:00.273921', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.092703 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092707 | orchestrator | 2026-01-10 14:38:45.092711 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-10 14:38:45.092715 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:00.174) 0:00:18.124 ****** 2026-01-10 14:38:45.092719 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.092722 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.092726 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.092730 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.092734 | orchestrator | ok: 
[testbed-node-1] 2026-01-10 14:38:45.092741 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.092744 | orchestrator | 2026-01-10 14:38:45.092748 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-10 14:38:45.092752 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:02.293) 0:00:20.418 ****** 2026-01-10 14:38:45.092756 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:38:45.092760 | orchestrator | 2026-01-10 14:38:45.092763 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-10 14:38:45.092767 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:00.832) 0:00:21.250 ****** 2026-01-10 14:38:45.092771 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092775 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.092778 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.092783 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.092789 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.092795 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.092800 | orchestrator | 2026-01-10 14:38:45.092806 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-10 14:38:45.092811 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:01.666) 0:00:22.916 ****** 2026-01-10 14:38:45.092822 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092828 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.092832 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.092836 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.092839 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.092843 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.092847 | orchestrator | 2026-01-10 14:38:45.092851 | orchestrator | TASK [ceph-facts : 
Set_fact fsid] ********************************************** 2026-01-10 14:38:45.092855 | orchestrator | Saturday 10 January 2026 14:27:22 +0000 (0:00:01.080) 0:00:23.997 ****** 2026-01-10 14:38:45.092858 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092862 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.092866 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.092869 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.092873 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.092877 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.092880 | orchestrator | 2026-01-10 14:38:45.092884 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-10 14:38:45.092888 | orchestrator | Saturday 10 January 2026 14:27:23 +0000 (0:00:00.715) 0:00:24.712 ****** 2026-01-10 14:38:45.092892 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092895 | orchestrator | 2026-01-10 14:38:45.092899 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-10 14:38:45.092903 | orchestrator | Saturday 10 January 2026 14:27:23 +0000 (0:00:00.091) 0:00:24.804 ****** 2026-01-10 14:38:45.092907 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092910 | orchestrator | 2026-01-10 14:38:45.092914 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-10 14:38:45.092918 | orchestrator | Saturday 10 January 2026 14:27:23 +0000 (0:00:00.195) 0:00:24.999 ****** 2026-01-10 14:38:45.092922 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092925 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.092929 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.092936 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.092939 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.092943 | 
orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.092947 | orchestrator | 2026-01-10 14:38:45.092951 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-10 14:38:45.092954 | orchestrator | Saturday 10 January 2026 14:27:24 +0000 (0:00:00.623) 0:00:25.622 ****** 2026-01-10 14:38:45.092958 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.092962 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.092966 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.092969 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.092973 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.092977 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.092980 | orchestrator | 2026-01-10 14:38:45.092984 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-10 14:38:45.092988 | orchestrator | Saturday 10 January 2026 14:27:25 +0000 (0:00:01.212) 0:00:26.835 ****** 2026-01-10 14:38:45.092992 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.093038 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.093042 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.093045 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.093049 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.093053 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.093057 | orchestrator | 2026-01-10 14:38:45.093060 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-10 14:38:45.093064 | orchestrator | Saturday 10 January 2026 14:27:26 +0000 (0:00:00.731) 0:00:27.566 ****** 2026-01-10 14:38:45.093068 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.093072 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.093075 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.093083 | 
orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.093086 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.093090 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.093094 | orchestrator | 2026-01-10 14:38:45.093097 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-10 14:38:45.093101 | orchestrator | Saturday 10 January 2026 14:27:27 +0000 (0:00:00.933) 0:00:28.500 ****** 2026-01-10 14:38:45.093105 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.093108 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.093112 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.093116 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.093119 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.093123 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.093127 | orchestrator | 2026-01-10 14:38:45.093130 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-10 14:38:45.093134 | orchestrator | Saturday 10 January 2026 14:27:28 +0000 (0:00:01.439) 0:00:29.940 ****** 2026-01-10 14:38:45.093138 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.093142 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.093145 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.093149 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.093156 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.093161 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.093167 | orchestrator | 2026-01-10 14:38:45.093172 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-10 14:38:45.093178 | orchestrator | Saturday 10 January 2026 14:27:29 +0000 (0:00:00.888) 0:00:30.828 ****** 2026-01-10 14:38:45.093184 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.093191 | 
orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.093197 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.093203 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.093208 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.093212 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.093216 | orchestrator | 2026-01-10 14:38:45.093220 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-10 14:38:45.093223 | orchestrator | Saturday 10 January 2026 14:27:30 +0000 (0:00:00.637) 0:00:31.466 ****** 2026-01-10 14:38:45.093228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e-osd--block--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e', 'dm-uuid-LVM-XVOmmU8gw9B369gyxlceU1KPl5227E4OnHZY8euzWOfpRxy0f0KzZTzTJfXguFbf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.093234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aeb55798--e032--5872--951c--62472db4891e-osd--block--aeb55798--e032--5872--951c--62472db4891e', 'dm-uuid-LVM-pqJmM8ieqWZ6BdY530dv83iHOMYrza8a16k50Rvgm1IhOwTHfYLJwUFE1CPFcmjp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.093242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.093250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.093255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.093258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.093262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
[… dozens of similar per-device loop items elided: the task skipped every block device item (dm-0/dm-1 Ceph OSD LVM devices, loop0–loop7, sda–sdd QEMU HARDDISKs, sr0 QEMU DVD-ROM) on testbed-node-0 through testbed-node-5; final host results were "skipping" for testbed-node-3, testbed-node-4, and testbed-node-5 …]
2026-01-10 14:38:45.094517 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f', 'scsi-SQEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part1', 'scsi-SQEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part14', 'scsi-SQEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part15', 'scsi-SQEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part16', 'scsi-SQEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:38:45.094522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:38:45.094530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:38:45.094538 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.094548 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.094554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.094560 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.094565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.094571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.094582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.094589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.094594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.094600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:38:45.094613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e', 'scsi-SQEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:38:45.094630 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:38:45.094636 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.094643 | orchestrator | 2026-01-10 14:38:45.094650 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-10 14:38:45.094679 | orchestrator | Saturday 10 January 2026 14:27:31 +0000 (0:00:01.466) 0:00:32.933 ****** 2026-01-10 14:38:45.094687 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e-osd--block--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e', 'dm-uuid-LVM-XVOmmU8gw9B369gyxlceU1KPl5227E4OnHZY8euzWOfpRxy0f0KzZTzTJfXguFbf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aeb55798--e032--5872--951c--62472db4891e-osd--block--aeb55798--e032--5872--951c--62472db4891e', 'dm-uuid-LVM-pqJmM8ieqWZ6BdY530dv83iHOMYrza8a16k50Rvgm1IhOwTHfYLJwUFE1CPFcmjp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094732 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094740 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094778 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094808 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094819 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--381f50a6--56c2--5a32--835b--1a08246466ad-osd--block--381f50a6--56c2--5a32--835b--1a08246466ad', 'dm-uuid-LVM-qB82mU0uRSY6RHhcksnqy9N8MyTE4sXUt2kgIbhVbEctaWIrtigAJOrMzz6Tn28Q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094832 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a6c1f07--f96f--5f9c--9404--64a84774a9be-osd--block--5a6c1f07--f96f--5f9c--9404--64a84774a9be', 'dm-uuid-LVM-rs3eayTU9p4tHP9XXl5XuCOOAEpJ4KkoVh8Fw56E0fiOqhRkxS8Qh0ZKStJpEIbA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094842 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094846 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094854 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part1', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part14', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part15', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part16', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 
1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e-osd--block--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-A2f7Z4-KCNH-W5Ce-ou5s-feTB-RgoC-qTsSaF', 'scsi-0QEMU_QEMU_HARDDISK_fb1cd23c-1eba-48f8-b0af-e37f12bddfbe', 'scsi-SQEMU_QEMU_HARDDISK_fb1cd23c-1eba-48f8-b0af-e37f12bddfbe'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--aeb55798--e032--5872--951c--62472db4891e-osd--block--aeb55798--e032--5872--951c--62472db4891e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DwDCtJ-LyMf-XzrY-Eff3-Djlk-vdWz-pf7GZs', 'scsi-0QEMU_QEMU_HARDDISK_2ce7cca4-0817-4dba-a1e7-697e67028341', 'scsi-SQEMU_QEMU_HARDDISK_2ce7cca4-0817-4dba-a1e7-697e67028341'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094885 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_644eb2b6-5717-40d5-adcd-cd376a39a92a', 'scsi-SQEMU_QEMU_HARDDISK_644eb2b6-5717-40d5-adcd-cd376a39a92a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094899 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094911 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094918 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094924 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part1', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part14', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part15', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part16', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094967 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.094971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f-osd--block--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f', 'dm-uuid-LVM-7cucNAsaiAAotIpLmbIAdQU43KNMMITqiNtsEoSbWPIVcHr7jJK8P2eJW6H4b5ym'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': 
{'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8e61bc65--6745--5d05--9905--13a4cfa0641e-osd--block--8e61bc65--6745--5d05--9905--13a4cfa0641e', 'dm-uuid-LVM-XVGDo2c7ar3U5yej56EfTBud9IPUfNDALDqim7D21QW70LyT1U2UjGoboBptL3og'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--381f50a6--56c2--5a32--835b--1a08246466ad-osd--block--381f50a6--56c2--5a32--835b--1a08246466ad'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xIeQyy-K7KP-9fGF-640Y-OGvx-NBxv-nPopt0', 'scsi-0QEMU_QEMU_HARDDISK_4c46785e-60ba-460b-8af0-69ed9944293e', 'scsi-SQEMU_QEMU_HARDDISK_4c46785e-60ba-460b-8af0-69ed9944293e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.094996 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095000 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095010 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5a6c1f07--f96f--5f9c--9404--64a84774a9be-osd--block--5a6c1f07--f96f--5f9c--9404--64a84774a9be'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fqjcD3-KDxr-vbAM-6N0Q-cc7U-P1SB-bgaSVv', 'scsi-0QEMU_QEMU_HARDDISK_f60c9e3f-4fb9-4762-8319-6decaa6c25a2', 'scsi-SQEMU_QEMU_HARDDISK_f60c9e3f-4fb9-4762-8319-6decaa6c25a2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095014 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095018 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095026 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_56640cac-7dbd-450f-ace0-5456f0f7a79c', 'scsi-SQEMU_QEMU_HARDDISK_56640cac-7dbd-450f-ace0-5456f0f7a79c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095030 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095037 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-10 14:38:45.095045 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095050 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095054 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 
KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095058 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095070 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095077 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095083 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 
14:38:45.095094 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f', 'scsi-SQEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part1', 'scsi-SQEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part14', 'scsi-SQEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part15', 'scsi-SQEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part16', 'scsi-SQEMU_QEMU_HARDDISK_20f34273-2e89-4d41-972e-9d1b835af58f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095260 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095269 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.095277 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095282 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part16', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095290 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f-osd--block--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yk7Nti-nWl1-IsZK-CxIA-L5NY-lYh9-PSyeZY', 'scsi-0QEMU_QEMU_HARDDISK_6601bfae-4805-46bf-9ab8-35c841e000dc', 'scsi-SQEMU_QEMU_HARDDISK_6601bfae-4805-46bf-9ab8-35c841e000dc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095300 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8e61bc65--6745--5d05--9905--13a4cfa0641e-osd--block--8e61bc65--6745--5d05--9905--13a4cfa0641e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Orc67S-vipX-AVMa-hkR8-UvV0-2ko5-K0ZhW3', 'scsi-0QEMU_QEMU_HARDDISK_80389416-edd4-4aaf-b80d-5b05821e7076', 'scsi-SQEMU_QEMU_HARDDISK_80389416-edd4-4aaf-b80d-5b05821e7076'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095304 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095308 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095312 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095319 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095323 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e023e992-ae40-4cae-8e0e-c078bcc164d6', 'scsi-SQEMU_QEMU_HARDDISK_e023e992-ae40-4cae-8e0e-c078bcc164d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095333 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095337 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:38:45.095341 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095345 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095352 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095362 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4', 'scsi-SQEMU_QEMU_HARDDISK_0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f070c9e-b618-4a7f-a2a9-f2c88abe8fb4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095367 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095376 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.095380 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.095384 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.095388 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095392 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095398 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095404 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095408 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095412 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095460 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095464 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095495 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e', 'scsi-SQEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce9993ed-1047-42a3-ac7e-aedc9bfe346e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095501 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:38:45.095511 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.095515 | orchestrator |
2026-01-10 14:38:45.095519 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-10 14:38:45.095523 | orchestrator | Saturday 10 January 2026 14:27:33 +0000 (0:00:01.384) 0:00:34.317 ******
2026-01-10 14:38:45.095527 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.095532 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.095536 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.095539 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.095543 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.095547 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.095550 | orchestrator |
2026-01-10 14:38:45.095554 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-10 14:38:45.095559 | orchestrator | Saturday 10 January 2026 14:27:34 +0000 (0:00:01.499) 0:00:35.817 ******
2026-01-10 14:38:45.095565 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.095571 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.095577 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.095582 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.095587 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.095592 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.095598 | orchestrator |
2026-01-10 14:38:45.095603 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-10 14:38:45.095609 | orchestrator | Saturday 10 January 2026 14:27:35 +0000 (0:00:00.604) 0:00:36.422 ******
2026-01-10 14:38:45.095615 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.095620 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.095626 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.095631 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.095637 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.095643 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.095649 | orchestrator |
2026-01-10 14:38:45.095655 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-10 14:38:45.095678 | orchestrator | Saturday 10 January 2026 14:27:36 +0000 (0:00:00.862) 0:00:37.284 ******
2026-01-10 14:38:45.095684 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.095689 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.095695 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.095701 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.095707 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.095713 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.095719 | orchestrator |
2026-01-10 14:38:45.095725 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-10 14:38:45.095731 | orchestrator | Saturday 10 January 2026 14:27:36 +0000 (0:00:00.724) 0:00:38.009 ******
2026-01-10 14:38:45.095737 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.095743 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.095748 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.095754 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.095760 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.095764 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.095768 | orchestrator |
2026-01-10 14:38:45.095782 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-10 14:38:45.095786 | orchestrator | Saturday 10 January 2026 14:27:37 +0000 (0:00:00.831) 0:00:38.840 ******
2026-01-10 14:38:45.095796 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.095800 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.095804 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.095807 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.095817 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.095821 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.095824 | orchestrator |
2026-01-10 14:38:45.095828 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-10 14:38:45.095832 | orchestrator | Saturday 10 January 2026 14:27:38 +0000 (0:00:00.947) 0:00:39.788 ******
2026-01-10 14:38:45.095837 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:38:45.095843 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-10 14:38:45.095849 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-10 14:38:45.095854 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-10 14:38:45.095864 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-10 14:38:45.095872 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:38:45.095877 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:38:45.095883 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-10 14:38:45.095888 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:38:45.095894 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-10 14:38:45.095900 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:38:45.095906 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:38:45.095912 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:38:45.095917 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:38:45.095923 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:38:45.095930 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:38:45.095936 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:38:45.095941 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:38:45.095947 | orchestrator |
2026-01-10 14:38:45.095953 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-10 14:38:45.095960 | orchestrator | Saturday 10 January 2026 14:27:41 +0000 (0:00:03.331) 0:00:43.119 ******
2026-01-10 14:38:45.095966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:38:45.095973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:38:45.095980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:38:45.095986 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.095992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:38:45.095996 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:38:45.096000 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:38:45.096006 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.096012 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:38:45.096018 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:38:45.096025 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:38:45.096032 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.096038 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:38:45.096045 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:38:45.096051 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:38:45.096058 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.096064 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-10 14:38:45.096071 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-10 14:38:45.096077 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-10 14:38:45.096083 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.096089 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-10 14:38:45.096096 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-10 14:38:45.096110 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-10 14:38:45.096116 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.096120 | orchestrator |
2026-01-10 14:38:45.096124 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-10 14:38:45.096129 | orchestrator | Saturday 10 January 2026 14:27:42 +0000 (0:00:01.050) 0:00:44.170 ******
2026-01-10 14:38:45.096133 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.096137 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.096142 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.096147 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:38:45.096152 | orchestrator |
2026-01-10 14:38:45.096157 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-10 14:38:45.096161 | orchestrator | Saturday 10 January 2026 14:27:43 +0000 (0:00:00.832) 0:00:45.002 ******
2026-01-10 14:38:45.096166 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.096170 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.096174 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.096179 | orchestrator |
2026-01-10 14:38:45.096184 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-10 14:38:45.096188 | orchestrator | Saturday 10 January 2026 14:27:44 +0000 (0:00:00.277) 0:00:45.280 ******
2026-01-10 14:38:45.096193 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.096197 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.096202 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.096206 | orchestrator |
2026-01-10 14:38:45.096217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-10 14:38:45.096227 | orchestrator | Saturday 10 January 2026 14:27:44 +0000 (0:00:00.313) 0:00:45.594 ******
2026-01-10 14:38:45.096232 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.096236 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.096241 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.096245 | orchestrator |
2026-01-10 14:38:45.096250 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-10 14:38:45.096254 | orchestrator | Saturday 10 January 2026 14:27:44 +0000 (0:00:00.548) 0:00:46.143 ******
2026-01-10 14:38:45.096258 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.096263 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.096270 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.096276 | orchestrator |
2026-01-10 14:38:45.096281 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-10 14:38:45.096288 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.439) 0:00:46.583 ******
2026-01-10 14:38:45.096294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.096300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:38:45.096306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:38:45.096312 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.096317 | orchestrator |
2026-01-10 14:38:45.096323 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-10 14:38:45.096329 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.449) 0:00:47.032 ******
2026-01-10 14:38:45.096333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.096337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:38:45.096341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:38:45.096345 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.096348 | orchestrator |
2026-01-10 14:38:45.096352 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-10 14:38:45.096356 | orchestrator | Saturday 10 January 2026 14:27:46 +0000 (0:00:00.340) 0:00:47.373 ******
2026-01-10 14:38:45.096365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.096369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:38:45.096373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:38:45.096376 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.096380 | orchestrator |
2026-01-10 14:38:45.096384 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-10 14:38:45.096388 | orchestrator | Saturday 10 January 2026 14:27:46 +0000 (0:00:00.372) 0:00:47.746 ******
2026-01-10 14:38:45.096391 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.096395 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.096399 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.096403 | orchestrator |
2026-01-10 14:38:45.096406 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-10 14:38:45.096410 | orchestrator | Saturday 10 January 2026 14:27:46 +0000 (0:00:00.390) 0:00:48.136 ******
2026-01-10 14:38:45.096414 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-10 14:38:45.096418 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-10 14:38:45.096422 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-10 14:38:45.096426 | orchestrator |
2026-01-10 14:38:45.096429 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-10 14:38:45.096433 | orchestrator | Saturday 10 January 2026 14:27:47 +0000 (0:00:00.830) 0:00:48.966 ******
2026-01-10 14:38:45.096437 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-10 14:38:45.096441 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:38:45.096445 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:38:45.096449 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.096453 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-10 14:38:45.096456 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-10 14:38:45.096460 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-10 14:38:45.096464 | orchestrator |
2026-01-10 14:38:45.096468 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-10 14:38:45.096471 | orchestrator | Saturday 10 January 2026 14:27:48 +0000 (0:00:00.669) 0:00:49.635 ******
2026-01-10 14:38:45.096475 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-10 14:38:45.096479 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:38:45.096483 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:38:45.096486 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.096490 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-10 14:38:45.096494 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-10 14:38:45.096497 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-10 14:38:45.096528 | orchestrator |
2026-01-10 14:38:45.096533 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-10 14:38:45.096536 | orchestrator | Saturday 10 January 2026 14:27:50 +0000 (0:00:01.720) 0:00:51.356 ******
2026-01-10 14:38:45.096545 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.096550 | orchestrator |
2026-01-10 14:38:45.096554 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-10 14:38:45.096562 | orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:01.338) 0:00:52.694 ******
2026-01-10 14:38:45.096566 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.096574 | orchestrator |
2026-01-10 14:38:45.096578 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-10 14:38:45.096582 | orchestrator | Saturday 10 January 2026 14:27:53 +0000 (0:00:01.739) 0:00:54.433 ******
2026-01-10 14:38:45.096586 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.096590 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.096594 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.096598 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.096602 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.096606 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.096610 | orchestrator |
2026-01-10 14:38:45.096614 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-10 14:38:45.096618 | orchestrator | Saturday 10 January 2026 14:27:54 +0000 (0:00:01.678) 0:00:56.112 ******
2026-01-10 14:38:45.096621 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.096625 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.096629 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.096633 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.096637 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.096641 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.096645 | orchestrator |
2026-01-10 14:38:45.096649 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-10 14:38:45.096653 | orchestrator | Saturday 10 January 2026 14:27:56 +0000 (0:00:01.225) 0:00:57.337 ******
2026-01-10 14:38:45.096721 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.096727 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.096731 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.096735 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.096739 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.096742 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.096746 | orchestrator |
2026-01-10 14:38:45.096750 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-10 14:38:45.096754 | orchestrator | Saturday 10 January 2026 14:27:57 +0000 (0:00:01.119) 0:00:58.457 ******
2026-01-10 14:38:45.096757 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.096761 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.096765 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.096769 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.096775 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.096782 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.096787 | orchestrator |
2026-01-10 14:38:45.096793 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-10 14:38:45.096798 | orchestrator | Saturday 10 January 2026 14:27:58 +0000 (0:00:01.037) 0:00:59.494 ******
2026-01-10 14:38:45.096804 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.096810 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.096816 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.096821 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.096828 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.096834 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.096840 | orchestrator |
2026-01-10 14:38:45.096846 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-10 14:38:45.096852 | orchestrator | Saturday 10 January 2026 14:28:00 +0000 (0:00:02.475) 0:01:01.970 ******
2026-01-10 14:38:45.096858 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.096864 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.096871 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.096876 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.096880 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.096884 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.096888 | orchestrator |
2026-01-10 14:38:45.096892 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-10 14:38:45.096903 | orchestrator | Saturday 10 January 2026 14:28:02 +0000 (0:00:01.801) 0:01:03.771 ******
2026-01-10 14:38:45.096907 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.096911 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.096914 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.096918 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.096922 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.096925 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.096929 | orchestrator |
2026-01-10 14:38:45.096933 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-10 14:38:45.096937 | orchestrator | Saturday 10 January 2026 14:28:03 +0000 (0:00:01.281) 0:01:05.053 ******
2026-01-10 14:38:45.096941 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.096945 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.096948 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.096952 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.096956 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.096959 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.096963 | orchestrator |
2026-01-10 14:38:45.096967 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-10 14:38:45.096971 | orchestrator | Saturday 10 January 2026 14:28:05 +0000 (0:00:01.752) 0:01:06.805 ******
2026-01-10 14:38:45.096974 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.096978 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.096982 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.096985 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.096989 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.096993 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.096997 | orchestrator |
2026-01-10 14:38:45.097001 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-10 14:38:45.097005 | orchestrator | Saturday 10 January 2026 14:28:07 +0000 (0:00:01.799) 0:01:08.605 ******
2026-01-10 14:38:45.097009 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.097012 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.097016 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.097020 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.097023 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.097035 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.097039 | orchestrator |
2026-01-10 14:38:45.097043 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-10 14:38:45.097050 | orchestrator | Saturday 10 January 2026 14:28:08 +0000 (0:00:00.791) 0:01:09.396 ******
2026-01-10 14:38:45.097054 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.097058 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.097062 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.097066 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.097070 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.097074 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.097078 | orchestrator |
2026-01-10 14:38:45.097081 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-10 14:38:45.097085 | orchestrator | Saturday 10 January 2026 14:28:08 +0000 (0:00:00.782) 0:01:10.179 ******
2026-01-10 14:38:45.097089 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.097093 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.097096 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.097100 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.097104 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.097108 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.097111 | orchestrator |
2026-01-10 14:38:45.097115 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-10 14:38:45.097119 | orchestrator | Saturday 10 January 2026 14:28:09 +0000 (0:00:00.855) 0:01:11.034 ******
2026-01-10 14:38:45.097123 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.097127 | orchestrator | ok: [testbed-node-4]
2026-01-10
14:38:45.097130 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.097138 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.097142 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.097146 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.097149 | orchestrator | 2026-01-10 14:38:45.097153 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:38:45.097159 | orchestrator | Saturday 10 January 2026 14:28:10 +0000 (0:00:00.747) 0:01:11.782 ****** 2026-01-10 14:38:45.097165 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.097174 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.097184 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.097189 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.097195 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.097200 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.097205 | orchestrator | 2026-01-10 14:38:45.097210 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:38:45.097215 | orchestrator | Saturday 10 January 2026 14:28:11 +0000 (0:00:01.110) 0:01:12.893 ****** 2026-01-10 14:38:45.097221 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.097228 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.097234 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.097240 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.097246 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.097252 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.097259 | orchestrator | 2026-01-10 14:38:45.097265 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:38:45.097271 | orchestrator | Saturday 10 January 2026 14:28:12 +0000 (0:00:01.298) 0:01:14.191 ****** 2026-01-10 14:38:45.097276 | 
orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.097281 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.097287 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.097293 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.097299 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.097306 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.097311 | orchestrator | 2026-01-10 14:38:45.097317 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:38:45.097322 | orchestrator | Saturday 10 January 2026 14:28:13 +0000 (0:00:01.040) 0:01:15.231 ****** 2026-01-10 14:38:45.097328 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.097334 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.097340 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.097347 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.097352 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.097356 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.097360 | orchestrator | 2026-01-10 14:38:45.097364 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:38:45.097368 | orchestrator | Saturday 10 January 2026 14:28:15 +0000 (0:00:01.771) 0:01:17.002 ****** 2026-01-10 14:38:45.097372 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.097375 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.097379 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.097383 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.097386 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.097390 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.097394 | orchestrator | 2026-01-10 14:38:45.097398 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:38:45.097401 | orchestrator | Saturday 10 January 
2026 14:28:17 +0000 (0:00:01.318) 0:01:18.321 ****** 2026-01-10 14:38:45.097405 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.097409 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.097412 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.097416 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.097420 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.097424 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.097427 | orchestrator | 2026-01-10 14:38:45.097431 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-10 14:38:45.097441 | orchestrator | Saturday 10 January 2026 14:28:18 +0000 (0:00:01.661) 0:01:19.982 ****** 2026-01-10 14:38:45.097445 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.097449 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.097453 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.097457 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:38:45.097460 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:38:45.097464 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:38:45.097468 | orchestrator | 2026-01-10 14:38:45.097471 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-10 14:38:45.097475 | orchestrator | Saturday 10 January 2026 14:28:21 +0000 (0:00:02.654) 0:01:22.637 ****** 2026-01-10 14:38:45.097479 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.097483 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.097486 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.097490 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:38:45.097494 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:38:45.097503 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:38:45.097507 | orchestrator | 2026-01-10 14:38:45.097516 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] 
*********************** 2026-01-10 14:38:45.097520 | orchestrator | Saturday 10 January 2026 14:28:23 +0000 (0:00:02.541) 0:01:25.179 ****** 2026-01-10 14:38:45.097525 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:38:45.097529 | orchestrator | 2026-01-10 14:38:45.097533 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-10 14:38:45.097537 | orchestrator | Saturday 10 January 2026 14:28:25 +0000 (0:00:01.428) 0:01:26.607 ****** 2026-01-10 14:38:45.097541 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.097545 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.097549 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.097553 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.097557 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.097561 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.097565 | orchestrator | 2026-01-10 14:38:45.097568 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-10 14:38:45.097572 | orchestrator | Saturday 10 January 2026 14:28:26 +0000 (0:00:00.969) 0:01:27.577 ****** 2026-01-10 14:38:45.097576 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.097581 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.097587 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.097594 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.097599 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.097605 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.097611 | orchestrator | 2026-01-10 14:38:45.097618 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-10 14:38:45.097624 | orchestrator | 
Saturday 10 January 2026 14:28:27 +0000 (0:00:00.950) 0:01:28.527 ****** 2026-01-10 14:38:45.097630 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:38:45.097636 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:38:45.097643 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:38:45.097650 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:38:45.097700 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:38:45.097709 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:38:45.097718 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:38:45.097723 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:38:45.097734 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:38:45.097738 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:38:45.097742 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:38:45.097746 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:38:45.097750 | orchestrator | 2026-01-10 14:38:45.097753 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-10 14:38:45.097758 | orchestrator | Saturday 10 January 2026 14:28:28 +0000 (0:00:01.284) 0:01:29.812 ****** 2026-01-10 14:38:45.097761 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.097766 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.097769 | orchestrator | 
changed: [testbed-node-5] 2026-01-10 14:38:45.097773 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:38:45.097777 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:38:45.097781 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:38:45.097807 | orchestrator | 2026-01-10 14:38:45.097814 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-10 14:38:45.097818 | orchestrator | Saturday 10 January 2026 14:28:30 +0000 (0:00:01.774) 0:01:31.586 ****** 2026-01-10 14:38:45.097822 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.097825 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.097829 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.097833 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.097836 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.097841 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.097845 | orchestrator | 2026-01-10 14:38:45.097849 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-10 14:38:45.097853 | orchestrator | Saturday 10 January 2026 14:28:31 +0000 (0:00:01.015) 0:01:32.602 ****** 2026-01-10 14:38:45.097857 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.097861 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.097865 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.097869 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.097873 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.097877 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.097880 | orchestrator | 2026-01-10 14:38:45.097884 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-10 14:38:45.097888 | orchestrator | Saturday 10 January 2026 14:28:32 +0000 (0:00:00.684) 0:01:33.286 ****** 2026-01-10 14:38:45.097892 | orchestrator | skipping: 
[testbed-node-3] 2026-01-10 14:38:45.097896 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.097899 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.097903 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.097907 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.097911 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.097915 | orchestrator | 2026-01-10 14:38:45.097919 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-10 14:38:45.097929 | orchestrator | Saturday 10 January 2026 14:28:32 +0000 (0:00:00.599) 0:01:33.886 ****** 2026-01-10 14:38:45.097938 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:38:45.097943 | orchestrator | 2026-01-10 14:38:45.097946 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-10 14:38:45.097951 | orchestrator | Saturday 10 January 2026 14:28:33 +0000 (0:00:01.122) 0:01:35.009 ****** 2026-01-10 14:38:45.097954 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.097958 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.097962 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.097966 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.097977 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.097981 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.097985 | orchestrator | 2026-01-10 14:38:45.097989 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-10 14:38:45.097994 | orchestrator | Saturday 10 January 2026 14:29:40 +0000 (0:01:06.708) 0:02:41.718 ****** 2026-01-10 14:38:45.097998 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:38:45.098002 | 
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:38:45.098006 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:38:45.098009 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098072 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:38:45.098078 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:38:45.098082 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:38:45.098086 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098090 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:38:45.098094 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:38:45.098098 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:38:45.098102 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098105 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:38:45.098109 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:38:45.098114 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:38:45.098117 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:38:45.098122 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:38:45.098125 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:38:45.098129 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098133 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-01-10 14:38:45.098137 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098141 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:38:45.098145 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:38:45.098149 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098152 | orchestrator | 2026-01-10 14:38:45.098156 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-10 14:38:45.098160 | orchestrator | Saturday 10 January 2026 14:29:41 +0000 (0:00:01.064) 0:02:42.782 ****** 2026-01-10 14:38:45.098164 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098168 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098171 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098175 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098179 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098183 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098187 | orchestrator | 2026-01-10 14:38:45.098191 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-10 14:38:45.098195 | orchestrator | Saturday 10 January 2026 14:29:42 +0000 (0:00:01.028) 0:02:43.811 ****** 2026-01-10 14:38:45.098199 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098202 | orchestrator | 2026-01-10 14:38:45.098206 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-10 14:38:45.098211 | orchestrator | Saturday 10 January 2026 14:29:42 +0000 (0:00:00.204) 0:02:44.016 ****** 2026-01-10 14:38:45.098215 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098225 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098229 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098233 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:38:45.098237 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098240 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098244 | orchestrator | 2026-01-10 14:38:45.098248 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-10 14:38:45.098252 | orchestrator | Saturday 10 January 2026 14:29:43 +0000 (0:00:00.788) 0:02:44.804 ****** 2026-01-10 14:38:45.098258 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098264 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098272 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098280 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098287 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098292 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098298 | orchestrator | 2026-01-10 14:38:45.098305 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-10 14:38:45.098310 | orchestrator | Saturday 10 January 2026 14:29:44 +0000 (0:00:01.197) 0:02:46.001 ****** 2026-01-10 14:38:45.098316 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098322 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098328 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098341 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098347 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098359 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098365 | orchestrator | 2026-01-10 14:38:45.098372 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-10 14:38:45.098378 | orchestrator | Saturday 10 January 2026 14:29:45 +0000 (0:00:00.800) 0:02:46.802 ****** 2026-01-10 14:38:45.098384 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.098390 | orchestrator | ok: [testbed-node-4] 
2026-01-10 14:38:45.098396 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.098403 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.098409 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.098414 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.098420 | orchestrator | 2026-01-10 14:38:45.098451 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-10 14:38:45.098456 | orchestrator | Saturday 10 January 2026 14:29:48 +0000 (0:00:02.836) 0:02:49.638 ****** 2026-01-10 14:38:45.098461 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.098465 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.098468 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.098472 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.098476 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.098480 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.098483 | orchestrator | 2026-01-10 14:38:45.098487 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-10 14:38:45.098491 | orchestrator | Saturday 10 January 2026 14:29:48 +0000 (0:00:00.601) 0:02:50.240 ****** 2026-01-10 14:38:45.098495 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:38:45.098500 | orchestrator | 2026-01-10 14:38:45.098504 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-10 14:38:45.098508 | orchestrator | Saturday 10 January 2026 14:29:50 +0000 (0:00:01.286) 0:02:51.526 ****** 2026-01-10 14:38:45.098512 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098515 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098519 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098523 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:38:45.098527 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098531 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098535 | orchestrator | 2026-01-10 14:38:45.098539 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-10 14:38:45.098550 | orchestrator | Saturday 10 January 2026 14:29:51 +0000 (0:00:00.856) 0:02:52.383 ****** 2026-01-10 14:38:45.098555 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098559 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098563 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098567 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098570 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098575 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098578 | orchestrator | 2026-01-10 14:38:45.098582 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-10 14:38:45.098586 | orchestrator | Saturday 10 January 2026 14:29:51 +0000 (0:00:00.559) 0:02:52.942 ****** 2026-01-10 14:38:45.098590 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098594 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098597 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098602 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098605 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098609 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098613 | orchestrator | 2026-01-10 14:38:45.098617 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-10 14:38:45.098621 | orchestrator | Saturday 10 January 2026 14:29:52 +0000 (0:00:01.176) 0:02:54.118 ****** 2026-01-10 14:38:45.098625 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098629 | orchestrator | skipping: 
[testbed-node-4] 2026-01-10 14:38:45.098633 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098637 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098641 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098645 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098648 | orchestrator | 2026-01-10 14:38:45.098652 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-10 14:38:45.098672 | orchestrator | Saturday 10 January 2026 14:29:53 +0000 (0:00:00.673) 0:02:54.791 ****** 2026-01-10 14:38:45.098679 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098685 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098689 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098693 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098696 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098700 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098704 | orchestrator | 2026-01-10 14:38:45.098708 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-10 14:38:45.098712 | orchestrator | Saturday 10 January 2026 14:29:54 +0000 (0:00:00.687) 0:02:55.479 ****** 2026-01-10 14:38:45.098716 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098720 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098724 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098728 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098731 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098735 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098739 | orchestrator | 2026-01-10 14:38:45.098742 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-10 14:38:45.098746 | orchestrator | Saturday 10 January 2026 14:29:54 +0000 (0:00:00.642) 
0:02:56.122 ****** 2026-01-10 14:38:45.098750 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098754 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098758 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098762 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098766 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098769 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098773 | orchestrator | 2026-01-10 14:38:45.098777 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-10 14:38:45.098780 | orchestrator | Saturday 10 January 2026 14:29:55 +0000 (0:00:00.912) 0:02:57.034 ****** 2026-01-10 14:38:45.098784 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.098808 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.098813 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.098821 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.098825 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.098828 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.098833 | orchestrator | 2026-01-10 14:38:45.098836 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-10 14:38:45.098840 | orchestrator | Saturday 10 January 2026 14:29:56 +0000 (0:00:00.986) 0:02:58.021 ****** 2026-01-10 14:38:45.098844 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.098848 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.098852 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.098856 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.098859 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.098863 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.098867 | orchestrator | 2026-01-10 14:38:45.098871 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] 
**********************
2026-01-10 14:38:45.098875 | orchestrator | Saturday 10 January 2026 14:29:57 +0000 (0:00:01.147) 0:02:59.168 ******
2026-01-10 14:38:45.098880 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.098884 | orchestrator |
2026-01-10 14:38:45.098888 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-10 14:38:45.098892 | orchestrator | Saturday 10 January 2026 14:29:58 +0000 (0:00:01.049) 0:03:00.218 ******
2026-01-10 14:38:45.098896 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-10 14:38:45.098900 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-10 14:38:45.098904 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-10 14:38:45.098908 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-10 14:38:45.098912 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-10 14:38:45.098916 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-10 14:38:45.098920 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-10 14:38:45.098923 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-10 14:38:45.098927 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-10 14:38:45.098931 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-10 14:38:45.098935 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-10 14:38:45.098939 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-10 14:38:45.098943 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-10 14:38:45.098946 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-10 14:38:45.098950 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-10 14:38:45.098954 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-10 14:38:45.098958 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-10 14:38:45.098961 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-10 14:38:45.098965 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-10 14:38:45.098969 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-10 14:38:45.098973 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-10 14:38:45.098977 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-10 14:38:45.098980 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-10 14:38:45.098984 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-10 14:38:45.098988 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-10 14:38:45.098992 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-10 14:38:45.098996 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-10 14:38:45.099003 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-10 14:38:45.099007 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-10 14:38:45.099010 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-10 14:38:45.099014 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-10 14:38:45.099018 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-10 14:38:45.099021 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-10 14:38:45.099025 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-10 14:38:45.099029 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-10 14:38:45.099033 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-10 14:38:45.099037 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-10 14:38:45.099040 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:38:45.099044 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-10 14:38:45.099048 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-10 14:38:45.099052 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-10 14:38:45.099056 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-10 14:38:45.099060 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:38:45.099064 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:38:45.099070 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-10 14:38:45.099076 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:38:45.099084 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:38:45.099102 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:38:45.099112 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:38:45.099119 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:38:45.099124 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:38:45.099129 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:38:45.099134 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:38:45.099140 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:38:45.099147 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:38:45.099152 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:38:45.099158 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:38:45.099163 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:38:45.099169 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:38:45.099175 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:38:45.099181 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:38:45.099186 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:38:45.099191 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:38:45.099197 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:38:45.099203 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:38:45.099208 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:38:45.099214 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:38:45.099220 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:38:45.099232 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:38:45.099238 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:38:45.099243 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:38:45.099248 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:38:45.099254 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:38:45.099260 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:38:45.099266 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:38:45.099272 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:38:45.099278 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:38:45.099283 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:38:45.099288 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-10 14:38:45.099294 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:38:45.099300 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:38:45.099305 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:38:45.099310 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:38:45.099317 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-10 14:38:45.099322 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:38:45.099327 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-10 14:38:45.099333 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-10 14:38:45.099340 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-10 14:38:45.099346 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-10 14:38:45.099351 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-10 14:38:45.099357 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-10 14:38:45.099362 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-10 14:38:45.099368 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-10 14:38:45.099374 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:38:45.099380 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-10 14:38:45.099386 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-10 14:38:45.099391 | orchestrator |
2026-01-10 14:38:45.099396 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-10 14:38:45.099401 | orchestrator | Saturday 10 January 2026 14:30:06 +0000 (0:00:07.585) 0:03:07.803 ******
2026-01-10 14:38:45.099406 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099412 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099417 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.099424 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:38:45.099431 | orchestrator |
2026-01-10 14:38:45.099436 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-10 14:38:45.099442 | orchestrator | Saturday 10 January 2026 14:30:07 +0000 (0:00:01.370) 0:03:09.173 ******
2026-01-10 14:38:45.099499 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-10 14:38:45.099510 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-10 14:38:45.099514 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-10 14:38:45.099525 | orchestrator |
2026-01-10 14:38:45.099529 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-10 14:38:45.099533 | orchestrator | Saturday 10 January 2026 14:30:08 +0000 (0:00:01.025) 0:03:10.199 ******
2026-01-10 14:38:45.099537 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-10 14:38:45.099541 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-10 14:38:45.099545 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-10 14:38:45.099549 | orchestrator |
2026-01-10 14:38:45.099553 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-10 14:38:45.099557 | orchestrator | Saturday 10 January 2026 14:30:10 +0000 (0:00:01.350) 0:03:11.550 ******
2026-01-10 14:38:45.099561 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.099565 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.099569 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.099572 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099576 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099580 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.099584 | orchestrator |
2026-01-10 14:38:45.099587 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-10 14:38:45.099591 | orchestrator | Saturday 10 January 2026 14:30:10 +0000 (0:00:00.556) 0:03:12.107 ******
2026-01-10 14:38:45.099595 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.099599 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.099603 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.099607 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099611 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099615 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.099619 | orchestrator |
2026-01-10 14:38:45.099622 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-10 14:38:45.099626 | orchestrator | Saturday 10 January 2026 14:30:11 +0000 (0:00:00.767) 0:03:12.874 ******
2026-01-10 14:38:45.099630 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.099634 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.099638 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.099641 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099645 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099649 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.099653 | orchestrator |
2026-01-10 14:38:45.099700 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-10 14:38:45.099708 | orchestrator | Saturday 10 January 2026 14:30:12 +0000 (0:00:00.602) 0:03:13.477 ******
2026-01-10 14:38:45.099714 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.099719 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.099725 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.099731 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099737 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099743 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.099749 | orchestrator |
2026-01-10 14:38:45.099756 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-10 14:38:45.099762 | orchestrator | Saturday 10 January 2026 14:30:12 +0000 (0:00:00.726) 0:03:14.203 ******
2026-01-10 14:38:45.099768 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.099776 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.099779 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.099783 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099787 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099790 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.099801 | orchestrator |
2026-01-10 14:38:45.099805 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-10 14:38:45.099810 | orchestrator | Saturday 10 January 2026 14:30:13 +0000 (0:00:00.555) 0:03:14.758 ******
2026-01-10 14:38:45.099813 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.099817 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.099821 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.099824 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099828 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099832 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.099835 | orchestrator |
2026-01-10 14:38:45.099840 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-10 14:38:45.099844 | orchestrator | Saturday 10 January 2026 14:30:14 +0000 (0:00:00.719) 0:03:15.477 ******
2026-01-10 14:38:45.099847 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.099851 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.099855 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.099859 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099862 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099866 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.099870 | orchestrator |
2026-01-10 14:38:45.099874 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-10 14:38:45.099877 | orchestrator | Saturday 10 January 2026 14:30:14 +0000 (0:00:00.578) 0:03:16.056 ******
2026-01-10 14:38:45.099881 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.099885 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.099889 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.099892 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099903 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099907 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.099912 | orchestrator |
2026-01-10 14:38:45.099922 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-10 14:38:45.099929 | orchestrator | Saturday 10 January 2026 14:30:15 +0000 (0:00:00.898) 0:03:16.954 ******
2026-01-10 14:38:45.099935 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099942 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099948 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.099955 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.099959 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.099963 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.099967 | orchestrator |
2026-01-10 14:38:45.099970 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-10 14:38:45.099974 | orchestrator | Saturday 10 January 2026 14:30:18 +0000 (0:00:03.168) 0:03:20.123 ******
2026-01-10 14:38:45.099978 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.099982 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.099985 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.099989 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.099993 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.099997 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100000 | orchestrator |
2026-01-10 14:38:45.100004 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-10 14:38:45.100008 | orchestrator | Saturday 10 January 2026 14:30:19 +0000 (0:00:00.702) 0:03:20.825 ******
2026-01-10 14:38:45.100012 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.100015 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.100021 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.100027 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100034 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100039 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100046 | orchestrator |
2026-01-10 14:38:45.100053 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-10 14:38:45.100062 | orchestrator | Saturday 10 January 2026 14:30:20 +0000 (0:00:00.601) 0:03:21.426 ******
2026-01-10 14:38:45.100066 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100070 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.100073 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.100077 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100081 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100084 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100088 | orchestrator |
2026-01-10 14:38:45.100092 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-10 14:38:45.100095 | orchestrator | Saturday 10 January 2026 14:30:20 +0000 (0:00:00.692) 0:03:22.119 ******
2026-01-10 14:38:45.100100 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-10 14:38:45.100105 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-10 14:38:45.100111 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-10 14:38:45.100119 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100128 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100135 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100141 | orchestrator |
2026-01-10 14:38:45.100146 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-10 14:38:45.100152 | orchestrator | Saturday 10 January 2026 14:30:21 +0000 (0:00:00.550) 0:03:22.670 ******
2026-01-10 14:38:45.100160 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-10 14:38:45.100170 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-10 14:38:45.100177 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100183 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-10 14:38:45.100189 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-10 14:38:45.100194 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-10 14:38:45.100212 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-10 14:38:45.100218 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.100223 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.100230 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100235 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100252 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100258 | orchestrator |
2026-01-10 14:38:45.100264 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-10 14:38:45.100271 | orchestrator | Saturday 10 January 2026 14:30:22 +0000 (0:00:00.727) 0:03:23.397 ******
2026-01-10 14:38:45.100275 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100279 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.100283 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.100286 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100290 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100294 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100297 | orchestrator |
2026-01-10 14:38:45.100301 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-10 14:38:45.100305 | orchestrator | Saturday 10 January 2026 14:30:22 +0000 (0:00:00.562) 0:03:23.960 ******
2026-01-10 14:38:45.100309 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100312 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.100316 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.100322 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100328 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100333 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100339 | orchestrator |
2026-01-10 14:38:45.100346 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-10 14:38:45.100351 | orchestrator | Saturday 10 January 2026 14:30:23 +0000 (0:00:00.753) 0:03:24.713 ******
2026-01-10 14:38:45.100357 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100364 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.100369 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.100376 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100382 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100388 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100394 | orchestrator |
2026-01-10 14:38:45.100400 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-10 14:38:45.100404 | orchestrator | Saturday 10 January 2026 14:30:24 +0000 (0:00:00.659) 0:03:25.373 ******
2026-01-10 14:38:45.100408 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100412 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.100415 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.100419 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100422 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100426 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100430 | orchestrator |
2026-01-10 14:38:45.100434 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-10 14:38:45.100438 | orchestrator | Saturday 10 January 2026 14:30:25 +0000 (0:00:00.951) 0:03:26.324 ******
2026-01-10 14:38:45.100441 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100445 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.100449 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.100453 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100456 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100460 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100464 | orchestrator |
2026-01-10 14:38:45.100467 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-10 14:38:45.100471 | orchestrator | Saturday 10 January 2026 14:30:25 +0000 (0:00:00.682) 0:03:27.007 ******
2026-01-10 14:38:45.100475 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.100479 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.100482 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.100486 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100490 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100493 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100497 | orchestrator |
2026-01-10 14:38:45.100501 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-10 14:38:45.100509 | orchestrator | Saturday 10 January 2026 14:30:26 +0000 (0:00:01.037) 0:03:28.045 ******
2026-01-10 14:38:45.100513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.100516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:38:45.100520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:38:45.100524 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100527 | orchestrator |
2026-01-10 14:38:45.100531 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-10 14:38:45.100535 | orchestrator | Saturday 10 January 2026 14:30:27 +0000 (0:00:00.476) 0:03:28.521 ******
2026-01-10 14:38:45.100539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.100543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:38:45.100546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:38:45.100550 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100554 | orchestrator |
2026-01-10 14:38:45.100558 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-10 14:38:45.100561 | orchestrator | Saturday 10 January 2026 14:30:27 +0000 (0:00:00.481) 0:03:29.003 ******
2026-01-10 14:38:45.100565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.100569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:38:45.100572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:38:45.100576 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100580 | orchestrator |
2026-01-10 14:38:45.100588 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-10 14:38:45.100596 | orchestrator | Saturday 10 January 2026 14:30:28 +0000 (0:00:00.456) 0:03:29.459 ******
2026-01-10 14:38:45.100600 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.100604 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.100607 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.100611 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100615 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100618 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100622 | orchestrator |
2026-01-10 14:38:45.100626 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-10 14:38:45.100630 | orchestrator | Saturday 10 January 2026 14:30:28 +0000 (0:00:00.752) 0:03:30.212 ******
2026-01-10 14:38:45.100634 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-10 14:38:45.100637 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-10 14:38:45.100641 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-10 14:38:45.100645 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-10 14:38:45.100649 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100652 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-10 14:38:45.100673 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100679 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-10 14:38:45.100683 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100686 | orchestrator |
2026-01-10 14:38:45.100690 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-10 14:38:45.100694 | orchestrator | Saturday 10 January 2026 14:30:30 +0000 (0:00:01.694) 0:03:31.906 ******
2026-01-10 14:38:45.100698 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:38:45.100702 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:38:45.100705 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:38:45.100709 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.100713 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.100717 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.100721 | orchestrator |
2026-01-10 14:38:45.100725 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-10 14:38:45.100729 | orchestrator | Saturday 10 January 2026 14:30:33 +0000 (0:00:02.572) 0:03:34.479 ******
2026-01-10 14:38:45.100736 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:38:45.100740 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:38:45.100743 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:38:45.100747 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.100751 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.100755 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.100758 | orchestrator |
2026-01-10 14:38:45.100762 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-10 14:38:45.100766 | orchestrator | Saturday 10 January 2026 14:30:34 +0000 (0:00:01.031) 0:03:35.511 ******
2026-01-10 14:38:45.100770 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100773 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.100777 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.100781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.100785 | orchestrator |
2026-01-10 14:38:45.100789 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-10 14:38:45.100793 | orchestrator | Saturday 10 January 2026 14:30:35 +0000 (0:00:01.495) 0:03:37.006 ******
2026-01-10 14:38:45.100796 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.100800 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.100803 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.100807 | orchestrator |
2026-01-10 14:38:45.100811 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-10 14:38:45.100815 | orchestrator | Saturday 10 January 2026 14:30:36 +0000 (0:00:00.334) 0:03:37.341 ******
2026-01-10 14:38:45.100818 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.100822 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.100826 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.100830 | orchestrator |
2026-01-10 14:38:45.100834 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-10 14:38:45.100837 | orchestrator | Saturday 10 January 2026 14:30:37 +0000 (0:00:01.633) 0:03:38.974 ******
2026-01-10 14:38:45.100841 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:38:45.100845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:38:45.100848 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:38:45.100852 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100856 | orchestrator |
2026-01-10 14:38:45.100859 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-10 14:38:45.100863 | orchestrator | Saturday 10 January 2026 14:30:38 +0000 (0:00:00.615) 0:03:39.590 ******
2026-01-10 14:38:45.100867 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.100870 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.100874 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.100878 | orchestrator |
2026-01-10 14:38:45.100882 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-10 14:38:45.100885 | orchestrator | Saturday 10 January 2026 14:30:38 +0000 (0:00:00.404) 0:03:39.994 ******
2026-01-10 14:38:45.100889 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.100893 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.100897 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.100901 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:38:45.100904 | orchestrator |
2026-01-10 14:38:45.100908 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-10 14:38:45.100912 | orchestrator | Saturday 10 January 2026 14:30:40 +0000 (0:00:01.357) 0:03:41.351 ******
2026-01-10 14:38:45.100916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.100920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:38:45.100923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:38:45.100931 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100935 | orchestrator |
2026-01-10 14:38:45.100943 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-10 14:38:45.100946 | orchestrator | Saturday 10 January 2026 14:30:40 +0000 (0:00:00.408) 0:03:41.760 ******
2026-01-10 14:38:45.100954 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100957 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.100961 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.100965 | orchestrator |
2026-01-10 14:38:45.100968 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-10 14:38:45.100972 | orchestrator | Saturday 10 January 2026 14:30:40 +0000 (0:00:00.329) 0:03:42.089 ******
2026-01-10 14:38:45.100976 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100980 | orchestrator |
2026-01-10 14:38:45.100983 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-10 14:38:45.100987 | orchestrator | Saturday 10 January 2026 14:30:41 +0000 (0:00:00.238) 0:03:42.328 ******
2026-01-10 14:38:45.100991 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.100995 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.100998 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.101002 | orchestrator |
2026-01-10 14:38:45.101005 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-10 14:38:45.101009 | orchestrator | Saturday 10 January 2026 14:30:41 +0000 (0:00:00.359) 0:03:42.687 ******
2026-01-10 14:38:45.101013 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.101017 | orchestrator |
2026-01-10 14:38:45.101020 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-10 14:38:45.101024 | orchestrator | Saturday 10 January 2026 14:30:41 +0000 (0:00:00.241) 0:03:42.928 ******
2026-01-10 14:38:45.101028 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.101032 | orchestrator |
2026-01-10 14:38:45.101035 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-10 14:38:45.101039 | orchestrator | Saturday 10 January 2026 14:30:41 +0000 (0:00:00.221) 0:03:43.150 ******
2026-01-10 14:38:45.101043 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.101046 | orchestrator |
2026-01-10 14:38:45.101050 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-10 14:38:45.101054 | orchestrator | Saturday 10 January 2026 14:30:41 +0000 (0:00:00.117) 0:03:43.267 ******
2026-01-10 14:38:45.101058 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.101061 | orchestrator |
2026-01-10 14:38:45.101065 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-10 14:38:45.101069 | orchestrator | Saturday 10 January 2026 14:30:42 +0000 (0:00:00.841) 0:03:44.109 ******
2026-01-10 14:38:45.101073 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.101077 | orchestrator |
2026-01-10 14:38:45.101081 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-10 14:38:45.101084 | orchestrator | Saturday 10 January 2026 14:30:43 +0000 (0:00:00.274) 0:03:44.384 ******
2026-01-10
14:38:45.101088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:38:45.101092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:38:45.101095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:38:45.101099 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.101103 | orchestrator | 2026-01-10 14:38:45.101107 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-10 14:38:45.101110 | orchestrator | Saturday 10 January 2026 14:30:43 +0000 (0:00:00.445) 0:03:44.829 ****** 2026-01-10 14:38:45.101114 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.101118 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.101123 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.101129 | orchestrator | 2026-01-10 14:38:45.101137 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-10 14:38:45.101147 | orchestrator | Saturday 10 January 2026 14:30:43 +0000 (0:00:00.333) 0:03:45.163 ****** 2026-01-10 14:38:45.101158 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.101164 | orchestrator | 2026-01-10 14:38:45.101171 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-10 14:38:45.101203 | orchestrator | Saturday 10 January 2026 14:30:44 +0000 (0:00:00.237) 0:03:45.400 ****** 2026-01-10 14:38:45.101207 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.101211 | orchestrator | 2026-01-10 14:38:45.101215 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-10 14:38:45.101218 | orchestrator | Saturday 10 January 2026 14:30:44 +0000 (0:00:00.258) 0:03:45.659 ****** 2026-01-10 14:38:45.101222 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101226 | orchestrator | skipping: [testbed-node-1] 
2026-01-10 14:38:45.101230 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101234 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.101237 | orchestrator | 2026-01-10 14:38:45.101241 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-10 14:38:45.101245 | orchestrator | Saturday 10 January 2026 14:30:45 +0000 (0:00:01.110) 0:03:46.770 ****** 2026-01-10 14:38:45.101248 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.101252 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.101256 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.101260 | orchestrator | 2026-01-10 14:38:45.101264 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-10 14:38:45.101267 | orchestrator | Saturday 10 January 2026 14:30:45 +0000 (0:00:00.347) 0:03:47.117 ****** 2026-01-10 14:38:45.101271 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.101275 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.101279 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.101282 | orchestrator | 2026-01-10 14:38:45.101287 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-10 14:38:45.101290 | orchestrator | Saturday 10 January 2026 14:30:47 +0000 (0:00:01.343) 0:03:48.460 ****** 2026-01-10 14:38:45.101294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:38:45.101298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:38:45.101302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:38:45.101305 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.101309 | orchestrator | 2026-01-10 14:38:45.101317 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called 
after restart] ********* 2026-01-10 14:38:45.101326 | orchestrator | Saturday 10 January 2026 14:30:48 +0000 (0:00:00.958) 0:03:49.419 ****** 2026-01-10 14:38:45.101329 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.101333 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.101337 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.101341 | orchestrator | 2026-01-10 14:38:45.101344 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-10 14:38:45.101348 | orchestrator | Saturday 10 January 2026 14:30:48 +0000 (0:00:00.735) 0:03:50.155 ****** 2026-01-10 14:38:45.101352 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101356 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.101360 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.101367 | orchestrator | 2026-01-10 14:38:45.101371 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-10 14:38:45.101375 | orchestrator | Saturday 10 January 2026 14:30:49 +0000 (0:00:00.865) 0:03:51.020 ****** 2026-01-10 14:38:45.101378 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.101382 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.101386 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.101389 | orchestrator | 2026-01-10 14:38:45.101393 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-10 14:38:45.101402 | orchestrator | Saturday 10 January 2026 14:30:50 +0000 (0:00:00.602) 0:03:51.623 ****** 2026-01-10 14:38:45.101405 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.101409 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.101413 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.101417 | 
orchestrator | 2026-01-10 14:38:45.101420 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-10 14:38:45.101424 | orchestrator | Saturday 10 January 2026 14:30:51 +0000 (0:00:01.363) 0:03:52.987 ****** 2026-01-10 14:38:45.101428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:38:45.101431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:38:45.101435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:38:45.101439 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.101443 | orchestrator | 2026-01-10 14:38:45.101446 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-10 14:38:45.101450 | orchestrator | Saturday 10 January 2026 14:30:52 +0000 (0:00:00.676) 0:03:53.663 ****** 2026-01-10 14:38:45.101454 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.101457 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.101461 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.101465 | orchestrator | 2026-01-10 14:38:45.101468 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-10 14:38:45.101472 | orchestrator | Saturday 10 January 2026 14:30:52 +0000 (0:00:00.495) 0:03:54.158 ****** 2026-01-10 14:38:45.101476 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.101480 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.101484 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.101487 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101491 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.101495 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101498 | orchestrator | 2026-01-10 14:38:45.101502 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-10 
14:38:45.101506 | orchestrator | Saturday 10 January 2026 14:30:54 +0000 (0:00:01.359) 0:03:55.518 ****** 2026-01-10 14:38:45.101510 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.101513 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.101517 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.101521 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:38:45.101524 | orchestrator | 2026-01-10 14:38:45.101528 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-10 14:38:45.101532 | orchestrator | Saturday 10 January 2026 14:30:55 +0000 (0:00:00.885) 0:03:56.404 ****** 2026-01-10 14:38:45.101536 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.101539 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.101543 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.101547 | orchestrator | 2026-01-10 14:38:45.101550 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-10 14:38:45.101554 | orchestrator | Saturday 10 January 2026 14:30:55 +0000 (0:00:00.685) 0:03:57.089 ****** 2026-01-10 14:38:45.101558 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:38:45.101561 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:38:45.101565 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:38:45.101569 | orchestrator | 2026-01-10 14:38:45.101572 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-10 14:38:45.101576 | orchestrator | Saturday 10 January 2026 14:30:57 +0000 (0:00:01.349) 0:03:58.438 ****** 2026-01-10 14:38:45.101580 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-10 14:38:45.101583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-10 14:38:45.101587 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:38:45.101591 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101599 | orchestrator | 2026-01-10 14:38:45.101602 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-10 14:38:45.101606 | orchestrator | Saturday 10 January 2026 14:30:57 +0000 (0:00:00.484) 0:03:58.923 ****** 2026-01-10 14:38:45.101610 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.101614 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.101618 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.101621 | orchestrator | 2026-01-10 14:38:45.101625 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-10 14:38:45.101629 | orchestrator | 2026-01-10 14:38:45.101633 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 14:38:45.101636 | orchestrator | Saturday 10 January 2026 14:30:58 +0000 (0:00:00.791) 0:03:59.715 ****** 2026-01-10 14:38:45.101643 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:38:45.101647 | orchestrator | 2026-01-10 14:38:45.101655 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:38:45.101675 | orchestrator | Saturday 10 January 2026 14:30:58 +0000 (0:00:00.524) 0:04:00.240 ****** 2026-01-10 14:38:45.101680 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:38:45.101684 | orchestrator | 2026-01-10 14:38:45.101688 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:38:45.101691 | orchestrator | Saturday 10 January 2026 14:30:59 +0000 (0:00:00.527) 0:04:00.767 ****** 2026-01-10 14:38:45.101695 | orchestrator | 
ok: [testbed-node-2] 2026-01-10 14:38:45.101699 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.101703 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.101706 | orchestrator | 2026-01-10 14:38:45.101710 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-10 14:38:45.101714 | orchestrator | Saturday 10 January 2026 14:31:00 +0000 (0:00:01.020) 0:04:01.788 ****** 2026-01-10 14:38:45.101717 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101721 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.101725 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101729 | orchestrator | 2026-01-10 14:38:45.101733 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:38:45.101737 | orchestrator | Saturday 10 January 2026 14:31:00 +0000 (0:00:00.370) 0:04:02.159 ****** 2026-01-10 14:38:45.101740 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101744 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.101748 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101752 | orchestrator | 2026-01-10 14:38:45.101755 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:38:45.101759 | orchestrator | Saturday 10 January 2026 14:31:01 +0000 (0:00:00.299) 0:04:02.458 ****** 2026-01-10 14:38:45.101763 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101766 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.101770 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101774 | orchestrator | 2026-01-10 14:38:45.101778 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-10 14:38:45.101781 | orchestrator | Saturday 10 January 2026 14:31:01 +0000 (0:00:00.249) 0:04:02.707 ****** 2026-01-10 14:38:45.101785 | orchestrator | ok: [testbed-node-0] 
2026-01-10 14:38:45.101789 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.101792 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.101796 | orchestrator | 2026-01-10 14:38:45.101800 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-10 14:38:45.101804 | orchestrator | Saturday 10 January 2026 14:31:02 +0000 (0:00:00.934) 0:04:03.642 ****** 2026-01-10 14:38:45.101807 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101811 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.101815 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101821 | orchestrator | 2026-01-10 14:38:45.101825 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:38:45.101829 | orchestrator | Saturday 10 January 2026 14:31:02 +0000 (0:00:00.273) 0:04:03.915 ****** 2026-01-10 14:38:45.101833 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101837 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.101840 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101844 | orchestrator | 2026-01-10 14:38:45.101848 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:38:45.101852 | orchestrator | Saturday 10 January 2026 14:31:02 +0000 (0:00:00.305) 0:04:04.220 ****** 2026-01-10 14:38:45.101855 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.101859 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.101863 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.101867 | orchestrator | 2026-01-10 14:38:45.101871 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-10 14:38:45.101874 | orchestrator | Saturday 10 January 2026 14:31:03 +0000 (0:00:00.972) 0:04:05.192 ****** 2026-01-10 14:38:45.101893 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.101897 | 
orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.101901 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.101905 | orchestrator | 2026-01-10 14:38:45.101909 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:38:45.101912 | orchestrator | Saturday 10 January 2026 14:31:05 +0000 (0:00:01.184) 0:04:06.377 ****** 2026-01-10 14:38:45.101916 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101920 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.101923 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101927 | orchestrator | 2026-01-10 14:38:45.101931 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:38:45.101935 | orchestrator | Saturday 10 January 2026 14:31:05 +0000 (0:00:00.329) 0:04:06.706 ****** 2026-01-10 14:38:45.101938 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.101942 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.101946 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.101949 | orchestrator | 2026-01-10 14:38:45.101953 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:38:45.101957 | orchestrator | Saturday 10 January 2026 14:31:05 +0000 (0:00:00.369) 0:04:07.076 ****** 2026-01-10 14:38:45.101960 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101964 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.101968 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101972 | orchestrator | 2026-01-10 14:38:45.101975 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:38:45.101979 | orchestrator | Saturday 10 January 2026 14:31:06 +0000 (0:00:00.291) 0:04:07.367 ****** 2026-01-10 14:38:45.101983 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.101987 | orchestrator | skipping: 
[testbed-node-1] 2026-01-10 14:38:45.101990 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.101994 | orchestrator | 2026-01-10 14:38:45.101998 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:38:45.102001 | orchestrator | Saturday 10 January 2026 14:31:06 +0000 (0:00:00.453) 0:04:07.821 ****** 2026-01-10 14:38:45.102008 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.102039 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.102043 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.102046 | orchestrator | 2026-01-10 14:38:45.102053 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:38:45.102057 | orchestrator | Saturday 10 January 2026 14:31:06 +0000 (0:00:00.304) 0:04:08.125 ****** 2026-01-10 14:38:45.102061 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.102067 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.102071 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.102074 | orchestrator | 2026-01-10 14:38:45.102078 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:38:45.102085 | orchestrator | Saturday 10 January 2026 14:31:07 +0000 (0:00:00.305) 0:04:08.431 ****** 2026-01-10 14:38:45.102089 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.102093 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:45.102097 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:45.102101 | orchestrator | 2026-01-10 14:38:45.102104 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:38:45.102108 | orchestrator | Saturday 10 January 2026 14:31:07 +0000 (0:00:00.309) 0:04:08.740 ****** 2026-01-10 14:38:45.102112 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.102115 | orchestrator | ok: 
[testbed-node-1] 2026-01-10 14:38:45.102119 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.102123 | orchestrator | 2026-01-10 14:38:45.102126 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:38:45.102130 | orchestrator | Saturday 10 January 2026 14:31:07 +0000 (0:00:00.439) 0:04:09.180 ****** 2026-01-10 14:38:45.102135 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.102142 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.102148 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.102153 | orchestrator | 2026-01-10 14:38:45.102159 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:38:45.102165 | orchestrator | Saturday 10 January 2026 14:31:08 +0000 (0:00:01.008) 0:04:10.188 ****** 2026-01-10 14:38:45.102171 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.102178 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.102184 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.102190 | orchestrator | 2026-01-10 14:38:45.102197 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-10 14:38:45.102201 | orchestrator | Saturday 10 January 2026 14:31:10 +0000 (0:00:01.259) 0:04:11.448 ****** 2026-01-10 14:38:45.102228 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.102233 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.102236 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.102240 | orchestrator | 2026-01-10 14:38:45.102244 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-10 14:38:45.102247 | orchestrator | Saturday 10 January 2026 14:31:11 +0000 (0:00:01.237) 0:04:12.685 ****** 2026-01-10 14:38:45.102251 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-01-10 14:38:45.102255 | 
orchestrator | 2026-01-10 14:38:45.102259 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-10 14:38:45.102262 | orchestrator | Saturday 10 January 2026 14:31:13 +0000 (0:00:02.023) 0:04:14.709 ****** 2026-01-10 14:38:45.102266 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.102270 | orchestrator | 2026-01-10 14:38:45.102273 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-10 14:38:45.102277 | orchestrator | Saturday 10 January 2026 14:31:13 +0000 (0:00:00.153) 0:04:14.863 ****** 2026-01-10 14:38:45.102281 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-10 14:38:45.102285 | orchestrator | 2026-01-10 14:38:45.102288 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-10 14:38:45.102292 | orchestrator | Saturday 10 January 2026 14:31:14 +0000 (0:00:00.942) 0:04:15.805 ****** 2026-01-10 14:38:45.102296 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.102299 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.102303 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.102307 | orchestrator | 2026-01-10 14:38:45.102310 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-10 14:38:45.102314 | orchestrator | Saturday 10 January 2026 14:31:15 +0000 (0:00:00.543) 0:04:16.349 ****** 2026-01-10 14:38:45.102318 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.102321 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.102325 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.102329 | orchestrator | 2026-01-10 14:38:45.102332 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-10 14:38:45.102340 | orchestrator | Saturday 10 January 2026 14:31:15 +0000 (0:00:00.483) 0:04:16.833 ****** 2026-01-10 14:38:45.102344 | orchestrator | changed: 
[testbed-node-0] 2026-01-10 14:38:45.102348 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:38:45.102351 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:38:45.102355 | orchestrator | 2026-01-10 14:38:45.102359 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-10 14:38:45.102363 | orchestrator | Saturday 10 January 2026 14:31:17 +0000 (0:00:01.576) 0:04:18.409 ****** 2026-01-10 14:38:45.102366 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:38:45.102370 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:38:45.102374 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:38:45.102378 | orchestrator | 2026-01-10 14:38:45.102381 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-10 14:38:45.102385 | orchestrator | Saturday 10 January 2026 14:31:18 +0000 (0:00:00.976) 0:04:19.386 ****** 2026-01-10 14:38:45.102389 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:38:45.102393 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:38:45.102396 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:38:45.102400 | orchestrator | 2026-01-10 14:38:45.102404 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-10 14:38:45.102407 | orchestrator | Saturday 10 January 2026 14:31:18 +0000 (0:00:00.690) 0:04:20.076 ****** 2026-01-10 14:38:45.102411 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.102415 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.102419 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.102422 | orchestrator | 2026-01-10 14:38:45.102426 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-10 14:38:45.102442 | orchestrator | Saturday 10 January 2026 14:31:19 +0000 (0:00:00.673) 0:04:20.750 ****** 2026-01-10 14:38:45.102446 | orchestrator | changed: [testbed-node-0] 2026-01-10 
14:38:45.102450 | orchestrator |
2026-01-10 14:38:45.102458 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-01-10 14:38:45.102462 | orchestrator | Saturday 10 January 2026 14:31:21 +0000 (0:00:01.663) 0:04:22.413 ******
2026-01-10 14:38:45.102466 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.102470 | orchestrator |
2026-01-10 14:38:45.102473 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-01-10 14:38:45.102477 | orchestrator | Saturday 10 January 2026 14:31:21 +0000 (0:00:00.634) 0:04:23.047 ******
2026-01-10 14:38:45.102481 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-10 14:38:45.102485 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-10 14:38:45.102488 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-10 14:38:45.102492 | orchestrator | changed: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-10 14:38:45.102496 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-01-10 14:38:45.102499 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-10 14:38:45.102503 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-10 14:38:45.102507 | orchestrator | changed: [testbed-node-2 -> {{ item }}]
2026-01-10 14:38:45.102510 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-10 14:38:45.102514 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-01-10 14:38:45.102518 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-10 14:38:45.102522 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-01-10 14:38:45.102525 | orchestrator |
2026-01-10 14:38:45.102529 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-01-10 14:38:45.102533 | orchestrator | Saturday 10 January 2026 14:31:25 +0000 (0:00:03.591) 0:04:26.639 ******
2026-01-10 14:38:45.102537 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.102540 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.102544 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.102551 | orchestrator |
2026-01-10 14:38:45.102555 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-01-10 14:38:45.102559 | orchestrator | Saturday 10 January 2026 14:31:26 +0000 (0:00:01.212) 0:04:27.852 ******
2026-01-10 14:38:45.102562 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.102566 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.102570 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.102573 | orchestrator |
2026-01-10 14:38:45.102577 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-01-10 14:38:45.102581 | orchestrator | Saturday 10 January 2026 14:31:27 +0000 (0:00:00.534) 0:04:28.386 ******
2026-01-10 14:38:45.102584 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.102588 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.102592 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.102595 | orchestrator |
2026-01-10 14:38:45.102599 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-01-10 14:38:45.102603 | orchestrator | Saturday 10 January 2026 14:31:27 +0000 (0:00:00.544) 0:04:28.930 ******
2026-01-10 14:38:45.102607 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.102610 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.102614 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.102618 | orchestrator |
2026-01-10 14:38:45.102621 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-01-10 14:38:45.102625 | orchestrator | Saturday 10 January 2026 14:31:29 +0000 (0:00:01.664) 0:04:30.594 ******
2026-01-10 14:38:45.102629 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.102633 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.102636 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.102640 | orchestrator |
2026-01-10 14:38:45.102644 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-01-10 14:38:45.102648 | orchestrator | Saturday 10 January 2026 14:31:30 +0000 (0:00:01.592) 0:04:32.186 ******
2026-01-10 14:38:45.102652 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.102655 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.102678 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.102682 | orchestrator |
2026-01-10 14:38:45.102686 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-01-10 14:38:45.102690 | orchestrator | Saturday 10 January 2026 14:31:31 +0000 (0:00:00.683) 0:04:32.491 ******
2026-01-10 14:38:45.102693 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.102697 | orchestrator |
2026-01-10 14:38:45.102701 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-01-10 14:38:45.102705 | orchestrator | Saturday 10 January 2026 14:31:31 +0000 (0:00:00.683) 0:04:33.174 ******
2026-01-10 14:38:45.102709 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.102712 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.102716 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.102720 | orchestrator |
2026-01-10 14:38:45.102723 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-01-10 14:38:45.102727 | orchestrator | Saturday 10 January 2026 14:31:32 +0000 (0:00:00.365) 0:04:33.540 ******
2026-01-10 14:38:45.102731 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.102735 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.102738 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.102742 | orchestrator |
2026-01-10 14:38:45.102746 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-01-10 14:38:45.102750 | orchestrator | Saturday 10 January 2026 14:31:32 +0000 (0:00:00.369) 0:04:33.910 ******
2026-01-10 14:38:45.102753 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.102757 | orchestrator |
2026-01-10 14:38:45.102761 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-01-10 14:38:45.102773 | orchestrator | Saturday 10 January 2026 14:31:33 +0000 (0:00:00.880) 0:04:34.790 ******
2026-01-10 14:38:45.102777 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.102781 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.102787 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.102791 | orchestrator |
2026-01-10 14:38:45.102795 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-01-10 14:38:45.102798 | orchestrator | Saturday 10 January 2026 14:31:35 +0000 (0:00:01.637) 0:04:36.427 ******
2026-01-10 14:38:45.102802 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.102806 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.102810 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.102813 | orchestrator |
2026-01-10 14:38:45.102817 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-01-10 14:38:45.102821 | orchestrator | Saturday 10 January 2026 14:31:36 +0000 (0:00:01.230) 0:04:37.658 ******
2026-01-10 14:38:45.102825 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.102829 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.102832 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.102836 | orchestrator |
2026-01-10 14:38:45.102840 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-01-10 14:38:45.102844 | orchestrator | Saturday 10 January 2026 14:31:38 +0000 (0:00:01.949) 0:04:39.607 ******
2026-01-10 14:38:45.102847 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.102851 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.102855 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.102858 | orchestrator |
2026-01-10 14:38:45.102862 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-01-10 14:38:45.102866 | orchestrator | Saturday 10 January 2026 14:31:40 +0000 (0:00:02.201) 0:04:41.809 ******
2026-01-10 14:38:45.102869 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.102873 | orchestrator |
2026-01-10 14:38:45.102877 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-01-10 14:38:45.102881 | orchestrator | Saturday 10 January 2026 14:31:41 +0000 (0:00:00.577) 0:04:42.386 ******
2026-01-10 14:38:45.102884 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-01-10 14:38:45.102888 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.102892 | orchestrator |
2026-01-10 14:38:45.102896 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-01-10 14:38:45.102900 | orchestrator | Saturday 10 January 2026 14:32:02 +0000 (0:00:21.749) 0:05:04.136 ******
2026-01-10 14:38:45.102904 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.102907 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.102911 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.102915 | orchestrator |
2026-01-10 14:38:45.102919 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-01-10 14:38:45.102941 | orchestrator | Saturday 10 January 2026 14:32:12 +0000 (0:00:09.782) 0:05:13.918 ******
2026-01-10 14:38:45.102945 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.102949 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.102953 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.102957 | orchestrator |
2026-01-10 14:38:45.102960 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-01-10 14:38:45.102964 | orchestrator | Saturday 10 January 2026 14:32:13 +0000 (0:00:00.645) 0:05:14.563 ******
2026-01-10 14:38:45.102969 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57261131afd260f3ae43b948ccccacc2748ec58b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-01-10 14:38:45.102975 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57261131afd260f3ae43b948ccccacc2748ec58b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-01-10 14:38:45.102985 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57261131afd260f3ae43b948ccccacc2748ec58b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-01-10 14:38:45.102991 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57261131afd260f3ae43b948ccccacc2748ec58b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-01-10 14:38:45.102999 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57261131afd260f3ae43b948ccccacc2748ec58b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-01-10 14:38:45.103008 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57261131afd260f3ae43b948ccccacc2748ec58b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__57261131afd260f3ae43b948ccccacc2748ec58b'}])
2026-01-10 14:38:45.103014 | orchestrator |
2026-01-10 14:38:45.103018 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-10 14:38:45.103021 | orchestrator | Saturday 10 January 2026 14:32:26 +0000 (0:00:13.624) 0:05:28.187 ******
2026-01-10 14:38:45.103025 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103029 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103032 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103036 | orchestrator |
2026-01-10 14:38:45.103040 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-10 14:38:45.103044 | orchestrator | Saturday 10 January 2026 14:32:27 +0000 (0:00:00.484) 0:05:28.672 ******
2026-01-10 14:38:45.103047 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.103051 | orchestrator |
2026-01-10 14:38:45.103055 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-10 14:38:45.103058 | orchestrator | Saturday 10 January 2026 14:32:28 +0000 (0:00:00.870) 0:05:29.544 ******
2026-01-10 14:38:45.103062 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103066 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103069 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103073 | orchestrator |
2026-01-10 14:38:45.103077 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-10 14:38:45.103080 | orchestrator | Saturday 10 January 2026 14:32:28 +0000 (0:00:00.321) 0:05:29.865 ******
2026-01-10 14:38:45.103084 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103088 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103091 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103095 | orchestrator |
2026-01-10 14:38:45.103098 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-10 14:38:45.103103 | orchestrator | Saturday 10 January 2026 14:32:28 +0000 (0:00:00.373) 0:05:30.238 ******
2026-01-10 14:38:45.103107 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:38:45.103114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:38:45.103118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:38:45.103121 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103125 | orchestrator |
2026-01-10 14:38:45.103129 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-10 14:38:45.103132 | orchestrator | Saturday 10 January 2026 14:32:30 +0000 (0:00:01.268) 0:05:31.507 ******
2026-01-10 14:38:45.103136 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103140 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103143 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103147 | orchestrator |
2026-01-10 14:38:45.103151 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-01-10 14:38:45.103154 | orchestrator |
2026-01-10 14:38:45.103158 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-10 14:38:45.103162 | orchestrator | Saturday 10 January 2026 14:32:30 +0000 (0:00:00.600) 0:05:32.107 ******
2026-01-10 14:38:45.103165 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.103169 | orchestrator |
2026-01-10 14:38:45.103173 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-10 14:38:45.103176 | orchestrator | Saturday 10 January 2026 14:32:31 +0000 (0:00:00.553) 0:05:32.661 ******
2026-01-10 14:38:45.103180 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.103184 | orchestrator |
2026-01-10 14:38:45.103188 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-10 14:38:45.103191 | orchestrator | Saturday 10 January 2026 14:32:32 +0000 (0:00:00.793) 0:05:33.455 ******
2026-01-10 14:38:45.103195 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103199 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103202 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103206 | orchestrator |
2026-01-10 14:38:45.103210 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-10 14:38:45.103213 | orchestrator | Saturday 10 January 2026 14:32:32 +0000 (0:00:00.812) 0:05:34.267 ******
2026-01-10 14:38:45.103217 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103221 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103225 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103228 | orchestrator |
2026-01-10 14:38:45.103232 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-10 14:38:45.103236 | orchestrator | Saturday 10 January 2026 14:32:33 +0000 (0:00:00.338) 0:05:34.606 ******
2026-01-10 14:38:45.103239 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103243 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103247 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103250 | orchestrator |
2026-01-10 14:38:45.103254 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-10 14:38:45.103258 | orchestrator | Saturday 10 January 2026 14:32:34 +0000 (0:00:00.701) 0:05:35.307 ******
2026-01-10 14:38:45.103261 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103265 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103269 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103273 | orchestrator |
2026-01-10 14:38:45.103276 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-10 14:38:45.103284 | orchestrator | Saturday 10 January 2026 14:32:34 +0000 (0:00:00.354) 0:05:35.662 ******
2026-01-10 14:38:45.103291 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103295 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103299 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103302 | orchestrator |
2026-01-10 14:38:45.103306 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-10 14:38:45.103310 | orchestrator | Saturday 10 January 2026 14:32:35 +0000 (0:00:00.729) 0:05:36.391 ******
2026-01-10 14:38:45.103318 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103322 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103325 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103329 | orchestrator |
2026-01-10 14:38:45.103332 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-10 14:38:45.103336 | orchestrator | Saturday 10 January 2026 14:32:35 +0000 (0:00:00.327) 0:05:36.719 ******
2026-01-10 14:38:45.103340 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103343 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103347 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103351 | orchestrator |
2026-01-10 14:38:45.103354 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-10 14:38:45.103358 | orchestrator | Saturday 10 January 2026 14:32:36 +0000 (0:00:00.625) 0:05:37.344 ******
2026-01-10 14:38:45.103362 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103365 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103369 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103373 | orchestrator |
2026-01-10 14:38:45.103377 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-10 14:38:45.103380 | orchestrator | Saturday 10 January 2026 14:32:36 +0000 (0:00:00.746) 0:05:38.091 ******
2026-01-10 14:38:45.103384 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103388 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103391 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103395 | orchestrator |
2026-01-10 14:38:45.103399 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-10 14:38:45.103403 | orchestrator | Saturday 10 January 2026 14:32:37 +0000 (0:00:00.734) 0:05:38.826 ******
2026-01-10 14:38:45.103406 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103410 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103414 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103417 | orchestrator |
2026-01-10 14:38:45.103421 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-10 14:38:45.103425 | orchestrator | Saturday 10 January 2026 14:32:37 +0000 (0:00:00.351) 0:05:39.177 ******
2026-01-10 14:38:45.103429 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103432 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103436 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103440 | orchestrator |
2026-01-10 14:38:45.103443 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-10 14:38:45.103447 | orchestrator | Saturday 10 January 2026 14:32:38 +0000 (0:00:00.615) 0:05:39.793 ******
2026-01-10 14:38:45.103451 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103455 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103458 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103462 | orchestrator |
2026-01-10 14:38:45.103466 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-10 14:38:45.103469 | orchestrator | Saturday 10 January 2026 14:32:38 +0000 (0:00:00.354) 0:05:40.147 ******
2026-01-10 14:38:45.103473 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103477 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103481 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103484 | orchestrator |
2026-01-10 14:38:45.103488 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-10 14:38:45.103492 | orchestrator | Saturday 10 January 2026 14:32:39 +0000 (0:00:00.329) 0:05:40.477 ******
2026-01-10 14:38:45.103495 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103499 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103503 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103506 | orchestrator |
2026-01-10 14:38:45.103510 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-10 14:38:45.103514 | orchestrator | Saturday 10 January 2026 14:32:39 +0000 (0:00:00.327) 0:05:40.804 ******
2026-01-10 14:38:45.103518 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103524 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103528 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103532 | orchestrator |
2026-01-10 14:38:45.103536 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-10 14:38:45.103540 | orchestrator | Saturday 10 January 2026 14:32:39 +0000 (0:00:00.284) 0:05:41.089 ******
2026-01-10 14:38:45.103543 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103547 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103551 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103554 | orchestrator |
2026-01-10 14:38:45.103558 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-10 14:38:45.103562 | orchestrator | Saturday 10 January 2026 14:32:40 +0000 (0:00:00.679) 0:05:41.768 ******
2026-01-10 14:38:45.103566 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103569 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103573 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103577 | orchestrator |
2026-01-10 14:38:45.103580 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-10 14:38:45.103584 | orchestrator | Saturday 10 January 2026 14:32:40 +0000 (0:00:00.349) 0:05:42.118 ******
2026-01-10 14:38:45.103588 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103592 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103595 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103599 | orchestrator |
2026-01-10 14:38:45.103602 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-10 14:38:45.103606 | orchestrator | Saturday 10 January 2026 14:32:41 +0000 (0:00:00.357) 0:05:42.476 ******
2026-01-10 14:38:45.103610 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103614 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103617 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103621 | orchestrator |
2026-01-10 14:38:45.103625 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-10 14:38:45.103629 | orchestrator | Saturday 10 January 2026 14:32:42 +0000 (0:00:00.894) 0:05:43.371 ******
2026-01-10 14:38:45.103636 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:38:45.103641 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:38:45.103644 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:38:45.103648 | orchestrator |
2026-01-10 14:38:45.103652 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-10 14:38:45.103689 | orchestrator | Saturday 10 January 2026 14:32:42 +0000 (0:00:00.661) 0:05:44.033 ******
2026-01-10 14:38:45.103695 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.103699 | orchestrator |
2026-01-10 14:38:45.103703 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-01-10 14:38:45.103706 | orchestrator | Saturday 10 January 2026 14:32:43 +0000 (0:00:00.571) 0:05:44.604 ******
2026-01-10 14:38:45.103710 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.103714 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.103718 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.103721 | orchestrator |
2026-01-10 14:38:45.103725 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-01-10 14:38:45.103729 | orchestrator | Saturday 10 January 2026 14:32:44 +0000 (0:00:00.709) 0:05:45.314 ******
2026-01-10 14:38:45.103732 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103736 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103740 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103744 | orchestrator |
2026-01-10 14:38:45.103747 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-01-10 14:38:45.103751 | orchestrator | Saturday 10 January 2026 14:32:44 +0000 (0:00:00.466) 0:05:45.780 ******
2026-01-10 14:38:45.103755 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:38:45.103763 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:38:45.103767 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:38:45.103771 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-01-10 14:38:45.103774 | orchestrator |
2026-01-10 14:38:45.103778 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-01-10 14:38:45.103782 | orchestrator | Saturday 10 January 2026 14:32:53 +0000 (0:00:09.458) 0:05:55.239 ******
2026-01-10 14:38:45.103785 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103789 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103793 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103797 | orchestrator |
2026-01-10 14:38:45.103800 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-01-10 14:38:45.103804 | orchestrator | Saturday 10 January 2026 14:32:54 +0000 (0:00:00.367) 0:05:55.606 ******
2026-01-10 14:38:45.103808 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-10 14:38:45.103812 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-10 14:38:45.103816 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-10 14:38:45.103820 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-10 14:38:45.103823 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-10 14:38:45.103827 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-10 14:38:45.103831 | orchestrator |
2026-01-10 14:38:45.103835 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-01-10 14:38:45.103839 | orchestrator | Saturday 10 January 2026 14:32:56 +0000 (0:00:02.015) 0:05:57.622 ******
2026-01-10 14:38:45.103843 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-10 14:38:45.103846 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-10 14:38:45.103850 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-10 14:38:45.103855 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:38:45.103862 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-10 14:38:45.103868 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-10 14:38:45.103874 | orchestrator |
2026-01-10 14:38:45.103880 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-01-10 14:38:45.103885 | orchestrator | Saturday 10 January 2026 14:32:57 +0000 (0:00:01.381) 0:05:59.003 ******
2026-01-10 14:38:45.103891 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.103897 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.103902 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.103908 | orchestrator |
2026-01-10 14:38:45.103915 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-01-10 14:38:45.103920 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:00.626) 0:05:59.630 ******
2026-01-10 14:38:45.103925 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.103930 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.103939 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.103947 | orchestrator |
2026-01-10 14:38:45.103955 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-10 14:38:45.104052 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:00.294) 0:05:59.925 ******
2026-01-10 14:38:45.104076 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.104083 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.104089 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.104095 | orchestrator |
2026-01-10 14:38:45.104101 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-10 14:38:45.104107 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:00.315) 0:06:00.240 ******
2026-01-10 14:38:45.104112 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.104118 | orchestrator |
2026-01-10 14:38:45.104124 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-10 14:38:45.104136 | orchestrator | Saturday 10 January 2026 14:32:59 +0000 (0:00:00.811) 0:06:01.052 ******
2026-01-10 14:38:45.104141 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.104147 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.104153 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.104159 | orchestrator |
2026-01-10 14:38:45.104174 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-10 14:38:45.104186 | orchestrator | Saturday 10 January 2026 14:33:00 +0000 (0:00:00.349) 0:06:01.401 ******
2026-01-10 14:38:45.104191 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.104195 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.104199 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.104202 | orchestrator |
2026-01-10 14:38:45.104206 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-10 14:38:45.104210 | orchestrator | Saturday 10 January 2026 14:33:00 +0000 (0:00:00.341) 0:06:01.743 ******
2026-01-10 14:38:45.104213 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.104217 | orchestrator |
2026-01-10 14:38:45.104221 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-10 14:38:45.104225 | orchestrator | Saturday 10 January 2026 14:33:01 +0000 (0:00:00.632) 0:06:02.376 ******
2026-01-10 14:38:45.104229 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.104232 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.104236 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.104239 | orchestrator |
2026-01-10 14:38:45.104243 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-10 14:38:45.104247 | orchestrator | Saturday 10 January 2026 14:33:02 +0000 (0:00:01.201) 0:06:03.577 ******
2026-01-10 14:38:45.104250 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.104254 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.104258 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.104261 | orchestrator |
2026-01-10 14:38:45.104265 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-10 14:38:45.104269 | orchestrator | Saturday 10 January 2026 14:33:03 +0000 (0:00:01.054) 0:06:04.632 ******
2026-01-10 14:38:45.104273 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.104276 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.104280 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.104283 | orchestrator |
2026-01-10 14:38:45.104287 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-10 14:38:45.104292 | orchestrator | Saturday 10 January 2026 14:33:05 +0000 (0:00:02.013) 0:06:06.646 ******
2026-01-10 14:38:45.104298 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.104304 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.104313 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.104320 | orchestrator |
2026-01-10 14:38:45.104326 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-10 14:38:45.104331 | orchestrator | Saturday 10 January 2026 14:33:08 +0000 (0:00:02.671) 0:06:09.317 ******
2026-01-10 14:38:45.104338 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.104343 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.104349 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-01-10 14:38:45.104354 | orchestrator |
2026-01-10 14:38:45.104360 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-01-10 14:38:45.104365 | orchestrator | Saturday 10 January 2026 14:33:08 +0000 (0:00:00.429) 0:06:09.746 ******
2026-01-10 14:38:45.104371 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-01-10 14:38:45.104377 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-01-10 14:38:45.104383 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-01-10 14:38:45.104395 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-01-10 14:38:45.104400 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-01-10 14:38:45.104406 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:38:45.104411 | orchestrator |
2026-01-10 14:38:45.104417 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-10 14:38:45.104423 | orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:30.113) 0:06:39.860 ******
2026-01-10 14:38:45.104429 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:38:45.104435 | orchestrator |
2026-01-10 14:38:45.104440 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-10 14:38:45.104446 | orchestrator | Saturday 10 January 2026 14:33:40 +0000 (0:00:01.433) 0:06:41.293 ******
2026-01-10 14:38:45.104452 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.104458 | orchestrator |
2026-01-10 14:38:45.104464 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-10 14:38:45.104470 | orchestrator | Saturday 10 January 2026 14:33:40 +0000 (0:00:00.316) 0:06:41.610 ******
2026-01-10 14:38:45.104476 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.104482 | orchestrator |
2026-01-10 14:38:45.104489 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-10 14:38:45.104494 | orchestrator | Saturday 10 January 2026 14:33:40 +0000 (0:00:00.158) 0:06:41.768 ******
2026-01-10 14:38:45.104498 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-01-10 14:38:45.104501 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-01-10 14:38:45.104505 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-01-10 14:38:45.104509 | orchestrator |
2026-01-10 14:38:45.104512 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr]
************************************** 2026-01-10 14:38:45.104516 | orchestrator | Saturday 10 January 2026 14:33:47 +0000 (0:00:07.118) 0:06:48.887 ****** 2026-01-10 14:38:45.104520 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-10 14:38:45.104528 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-10 14:38:45.104536 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-10 14:38:45.104540 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-10 14:38:45.104543 | orchestrator | 2026-01-10 14:38:45.104547 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:38:45.104551 | orchestrator | Saturday 10 January 2026 14:33:53 +0000 (0:00:05.690) 0:06:54.578 ****** 2026-01-10 14:38:45.104554 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:38:45.104558 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:38:45.104562 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:38:45.104566 | orchestrator | 2026-01-10 14:38:45.104569 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-10 14:38:45.104573 | orchestrator | Saturday 10 January 2026 14:33:53 +0000 (0:00:00.653) 0:06:55.231 ****** 2026-01-10 14:38:45.104577 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-01-10 14:38:45.104580 | orchestrator | 2026-01-10 14:38:45.104584 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-10 14:38:45.104588 | orchestrator | Saturday 10 January 2026 14:33:54 +0000 (0:00:00.888) 0:06:56.120 ****** 2026-01-10 14:38:45.104592 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.104595 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.104599 | orchestrator | ok: 
[testbed-node-2] 2026-01-10 14:38:45.104603 | orchestrator | 2026-01-10 14:38:45.104606 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-10 14:38:45.104614 | orchestrator | Saturday 10 January 2026 14:33:55 +0000 (0:00:00.378) 0:06:56.498 ****** 2026-01-10 14:38:45.104618 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:38:45.104622 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:38:45.104626 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:38:45.104629 | orchestrator | 2026-01-10 14:38:45.104633 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-10 14:38:45.104637 | orchestrator | Saturday 10 January 2026 14:33:56 +0000 (0:00:01.355) 0:06:57.854 ****** 2026-01-10 14:38:45.104640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-10 14:38:45.104644 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-10 14:38:45.104648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:38:45.104651 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:45.104655 | orchestrator | 2026-01-10 14:38:45.104680 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-10 14:38:45.104684 | orchestrator | Saturday 10 January 2026 14:33:57 +0000 (0:00:00.931) 0:06:58.786 ****** 2026-01-10 14:38:45.104688 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:45.104691 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:45.104695 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:45.104699 | orchestrator | 2026-01-10 14:38:45.104703 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-10 14:38:45.104706 | orchestrator | 2026-01-10 14:38:45.104710 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 
14:38:45.104714 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:00.878) 0:06:59.665 ****** 2026-01-10 14:38:45.104718 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.104722 | orchestrator | 2026-01-10 14:38:45.104725 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:38:45.104729 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:00.583) 0:07:00.248 ****** 2026-01-10 14:38:45.104733 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.104737 | orchestrator | 2026-01-10 14:38:45.104740 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:38:45.104744 | orchestrator | Saturday 10 January 2026 14:33:59 +0000 (0:00:00.806) 0:07:01.054 ****** 2026-01-10 14:38:45.104748 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.104752 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.104755 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.104759 | orchestrator | 2026-01-10 14:38:45.104763 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-10 14:38:45.104766 | orchestrator | Saturday 10 January 2026 14:34:00 +0000 (0:00:00.336) 0:07:01.391 ****** 2026-01-10 14:38:45.104770 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.104774 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.104778 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.104781 | orchestrator | 2026-01-10 14:38:45.104785 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:38:45.104789 | orchestrator | Saturday 10 January 2026 14:34:00 +0000 (0:00:00.708) 0:07:02.100 ****** 
2026-01-10 14:38:45.104792 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.104796 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.104800 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.104803 | orchestrator | 2026-01-10 14:38:45.104807 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:38:45.104811 | orchestrator | Saturday 10 January 2026 14:34:01 +0000 (0:00:00.838) 0:07:02.939 ****** 2026-01-10 14:38:45.104815 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.104818 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.104822 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.104826 | orchestrator | 2026-01-10 14:38:45.104829 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-10 14:38:45.104838 | orchestrator | Saturday 10 January 2026 14:34:02 +0000 (0:00:01.122) 0:07:04.061 ****** 2026-01-10 14:38:45.104842 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.104846 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.104849 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.104853 | orchestrator | 2026-01-10 14:38:45.104857 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-10 14:38:45.104861 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:00.338) 0:07:04.400 ****** 2026-01-10 14:38:45.104868 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.104872 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.104878 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.104882 | orchestrator | 2026-01-10 14:38:45.104886 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:38:45.104890 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:00.324) 0:07:04.724 ****** 2026-01-10 14:38:45.104893 | 
orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.104897 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.104901 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.104905 | orchestrator | 2026-01-10 14:38:45.104908 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:38:45.104912 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:00.318) 0:07:05.042 ****** 2026-01-10 14:38:45.104916 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.104919 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.104923 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.104927 | orchestrator | 2026-01-10 14:38:45.104930 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-10 14:38:45.104934 | orchestrator | Saturday 10 January 2026 14:34:04 +0000 (0:00:01.195) 0:07:06.237 ****** 2026-01-10 14:38:45.104938 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.104942 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.104945 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.104949 | orchestrator | 2026-01-10 14:38:45.104953 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:38:45.104956 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:00.743) 0:07:06.981 ****** 2026-01-10 14:38:45.104960 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.104964 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.104968 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.104971 | orchestrator | 2026-01-10 14:38:45.104975 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:38:45.104979 | orchestrator | Saturday 10 January 2026 14:34:06 +0000 (0:00:00.344) 0:07:07.326 ****** 2026-01-10 14:38:45.104982 | orchestrator | skipping: 
[testbed-node-3] 2026-01-10 14:38:45.104986 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.104990 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.104993 | orchestrator | 2026-01-10 14:38:45.104997 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:38:45.105001 | orchestrator | Saturday 10 January 2026 14:34:06 +0000 (0:00:00.317) 0:07:07.643 ****** 2026-01-10 14:38:45.105004 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.105008 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.105012 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.105015 | orchestrator | 2026-01-10 14:38:45.105019 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:38:45.105023 | orchestrator | Saturday 10 January 2026 14:34:07 +0000 (0:00:00.633) 0:07:08.277 ****** 2026-01-10 14:38:45.105026 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.105030 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.105034 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.105038 | orchestrator | 2026-01-10 14:38:45.105041 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:38:45.105045 | orchestrator | Saturday 10 January 2026 14:34:07 +0000 (0:00:00.384) 0:07:08.662 ****** 2026-01-10 14:38:45.105052 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.105056 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.105060 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.105063 | orchestrator | 2026-01-10 14:38:45.105067 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:38:45.105071 | orchestrator | Saturday 10 January 2026 14:34:07 +0000 (0:00:00.333) 0:07:08.995 ****** 2026-01-10 14:38:45.105075 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.105078 | 
orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.105082 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.105086 | orchestrator | 2026-01-10 14:38:45.105089 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:38:45.105093 | orchestrator | Saturday 10 January 2026 14:34:08 +0000 (0:00:00.302) 0:07:09.298 ****** 2026-01-10 14:38:45.105097 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.105101 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.105104 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.105108 | orchestrator | 2026-01-10 14:38:45.105112 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:38:45.105115 | orchestrator | Saturday 10 January 2026 14:34:08 +0000 (0:00:00.621) 0:07:09.919 ****** 2026-01-10 14:38:45.105119 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.105123 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.105126 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.105130 | orchestrator | 2026-01-10 14:38:45.105134 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:38:45.105137 | orchestrator | Saturday 10 January 2026 14:34:08 +0000 (0:00:00.324) 0:07:10.244 ****** 2026-01-10 14:38:45.105141 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.105145 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.105149 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.105152 | orchestrator | 2026-01-10 14:38:45.105156 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:38:45.105160 | orchestrator | Saturday 10 January 2026 14:34:09 +0000 (0:00:00.331) 0:07:10.575 ****** 2026-01-10 14:38:45.105163 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.105167 | orchestrator | ok: 
[testbed-node-4] 2026-01-10 14:38:45.105171 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.105174 | orchestrator | 2026-01-10 14:38:45.105178 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-01-10 14:38:45.105182 | orchestrator | Saturday 10 January 2026 14:34:10 +0000 (0:00:00.810) 0:07:11.386 ****** 2026-01-10 14:38:45.105185 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.105189 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.105193 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.105196 | orchestrator | 2026-01-10 14:38:45.105200 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-10 14:38:45.105204 | orchestrator | Saturday 10 January 2026 14:34:10 +0000 (0:00:00.351) 0:07:11.737 ****** 2026-01-10 14:38:45.105207 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:38:45.105215 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:38:45.105222 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:38:45.105226 | orchestrator | 2026-01-10 14:38:45.105229 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-10 14:38:45.105233 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:00.685) 0:07:12.423 ****** 2026-01-10 14:38:45.105237 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.105240 | orchestrator | 2026-01-10 14:38:45.105244 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-10 14:38:45.105248 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:00.550) 0:07:12.973 ****** 2026-01-10 14:38:45.105255 | orchestrator | skipping: 
[testbed-node-3] 2026-01-10 14:38:45.105259 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.105263 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.105266 | orchestrator | 2026-01-10 14:38:45.105270 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-10 14:38:45.105274 | orchestrator | Saturday 10 January 2026 14:34:12 +0000 (0:00:00.712) 0:07:13.686 ****** 2026-01-10 14:38:45.105278 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.105281 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.105285 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.105289 | orchestrator | 2026-01-10 14:38:45.105293 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-10 14:38:45.105296 | orchestrator | Saturday 10 January 2026 14:34:12 +0000 (0:00:00.341) 0:07:14.027 ****** 2026-01-10 14:38:45.105300 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.105304 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.105307 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.105311 | orchestrator | 2026-01-10 14:38:45.105315 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-10 14:38:45.105318 | orchestrator | Saturday 10 January 2026 14:34:13 +0000 (0:00:00.594) 0:07:14.622 ****** 2026-01-10 14:38:45.105322 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.105326 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.105330 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.105333 | orchestrator | 2026-01-10 14:38:45.105337 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-10 14:38:45.105341 | orchestrator | Saturday 10 January 2026 14:34:13 +0000 (0:00:00.342) 0:07:14.964 ****** 2026-01-10 14:38:45.105344 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-10 14:38:45.105348 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-10 14:38:45.105352 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-10 14:38:45.105356 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-10 14:38:45.105360 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-10 14:38:45.105363 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-10 14:38:45.105367 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-10 14:38:45.105371 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-10 14:38:45.105374 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-10 14:38:45.105378 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-10 14:38:45.105382 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-10 14:38:45.105385 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-10 14:38:45.105389 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-10 14:38:45.105393 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-10 14:38:45.105396 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-10 14:38:45.105400 | orchestrator | 2026-01-10 14:38:45.105404 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-01-10 14:38:45.105408 | orchestrator | Saturday 10 January 2026 14:34:17 +0000 (0:00:03.376) 0:07:18.341 ****** 2026-01-10 14:38:45.105411 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.105415 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.105422 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.105425 | orchestrator | 2026-01-10 14:38:45.105429 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-01-10 14:38:45.105433 | orchestrator | Saturday 10 January 2026 14:34:17 +0000 (0:00:00.370) 0:07:18.712 ****** 2026-01-10 14:38:45.105437 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.105440 | orchestrator | 2026-01-10 14:38:45.105444 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-01-10 14:38:45.105448 | orchestrator | Saturday 10 January 2026 14:34:17 +0000 (0:00:00.541) 0:07:19.254 ****** 2026-01-10 14:38:45.105452 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-10 14:38:45.105456 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-10 14:38:45.105460 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-01-10 14:38:45.105467 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-10 14:38:45.105471 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-01-10 14:38:45.105477 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-01-10 14:38:45.105481 | orchestrator | 2026-01-10 14:38:45.105485 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-01-10 14:38:45.105489 | orchestrator | Saturday 10 January 2026 14:34:19 +0000 (0:00:01.407) 0:07:20.661 ****** 2026-01-10 14:38:45.105492 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:38:45.105496 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:38:45.105500 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:38:45.105504 | orchestrator | 2026-01-10 14:38:45.105509 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:38:45.105515 | orchestrator | Saturday 10 January 2026 14:34:22 +0000 (0:00:02.648) 0:07:23.310 ****** 2026-01-10 14:38:45.105521 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:38:45.105528 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:38:45.105535 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.105541 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:38:45.105547 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-10 14:38:45.105553 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.105558 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:38:45.105564 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-10 14:38:45.105570 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.105575 | orchestrator | 2026-01-10 14:38:45.105581 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-01-10 14:38:45.105587 | orchestrator | Saturday 10 January 2026 14:34:23 +0000 (0:00:01.374) 0:07:24.685 ****** 2026-01-10 14:38:45.105592 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:38:45.105599 | orchestrator | 2026-01-10 14:38:45.105605 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-01-10 14:38:45.105611 | orchestrator | Saturday 10 January 2026 14:34:25 +0000 (0:00:02.252) 0:07:26.937 ****** 2026-01-10 14:38:45.105618 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.105624 | orchestrator | 2026-01-10 14:38:45.105630 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-01-10 14:38:45.105637 | orchestrator | Saturday 10 January 2026 14:34:26 +0000 (0:00:00.803) 0:07:27.741 ****** 2026-01-10 14:38:45.105643 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e', 'data_vg': 'ceph-2f4cdd2b-88b0-5432-8a57-fbfff03caf8e'}) 2026-01-10 14:38:45.105651 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-381f50a6-56c2-5a32-835b-1a08246466ad', 'data_vg': 'ceph-381f50a6-56c2-5a32-835b-1a08246466ad'}) 2026-01-10 14:38:45.105700 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f', 'data_vg': 'ceph-f26dfcab-b4e5-55cc-b0d4-5a4bbd1b375f'}) 2026-01-10 14:38:45.105708 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-aeb55798-e032-5872-951c-62472db4891e', 'data_vg': 'ceph-aeb55798-e032-5872-951c-62472db4891e'}) 2026-01-10 14:38:45.105714 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5a6c1f07-f96f-5f9c-9404-64a84774a9be', 'data_vg': 'ceph-5a6c1f07-f96f-5f9c-9404-64a84774a9be'}) 2026-01-10 14:38:45.105719 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8e61bc65-6745-5d05-9905-13a4cfa0641e', 'data_vg': 'ceph-8e61bc65-6745-5d05-9905-13a4cfa0641e'}) 2026-01-10 14:38:45.105725 | orchestrator | 2026-01-10 14:38:45.105731 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-01-10 14:38:45.105737 | orchestrator | Saturday 10 January 2026 14:35:11 +0000 (0:00:45.161) 0:08:12.902 ****** 2026-01-10 14:38:45.105743 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.105749 | orchestrator | skipping: [testbed-node-4] 2026-01-10 
14:38:45.105755 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.105762 | orchestrator | 2026-01-10 14:38:45.105768 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-01-10 14:38:45.105774 | orchestrator | Saturday 10 January 2026 14:35:11 +0000 (0:00:00.337) 0:08:13.240 ****** 2026-01-10 14:38:45.105780 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.105787 | orchestrator | 2026-01-10 14:38:45.105793 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-01-10 14:38:45.105800 | orchestrator | Saturday 10 January 2026 14:35:12 +0000 (0:00:00.811) 0:08:14.051 ****** 2026-01-10 14:38:45.105804 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.105808 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.105812 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.105816 | orchestrator | 2026-01-10 14:38:45.105819 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-01-10 14:38:45.105823 | orchestrator | Saturday 10 January 2026 14:35:13 +0000 (0:00:00.711) 0:08:14.763 ****** 2026-01-10 14:38:45.105827 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.105830 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.105834 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.105838 | orchestrator | 2026-01-10 14:38:45.105841 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-01-10 14:38:45.105845 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:02.745) 0:08:17.508 ****** 2026-01-10 14:38:45.105849 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.105853 | orchestrator | 2026-01-10 14:38:45.105861 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] ***********************************
2026-01-10 14:38:45.105872 | orchestrator | Saturday 10 January 2026 14:35:17 +0000 (0:00:00.848) 0:08:18.356 ******
2026-01-10 14:38:45.105876 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:38:45.105879 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:38:45.105883 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:38:45.105887 | orchestrator |
2026-01-10 14:38:45.105890 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-10 14:38:45.105894 | orchestrator | Saturday 10 January 2026 14:35:18 +0000 (0:00:01.245) 0:08:19.602 ******
2026-01-10 14:38:45.105898 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:38:45.105901 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:38:45.105905 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:38:45.105909 | orchestrator |
2026-01-10 14:38:45.105912 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-10 14:38:45.105916 | orchestrator | Saturday 10 January 2026 14:35:19 +0000 (0:00:01.288) 0:08:20.891 ******
2026-01-10 14:38:45.105920 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:38:45.105927 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:38:45.105931 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:38:45.105935 | orchestrator |
2026-01-10 14:38:45.105939 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-10 14:38:45.105942 | orchestrator | Saturday 10 January 2026 14:35:21 +0000 (0:00:01.988) 0:08:22.880 ******
2026-01-10 14:38:45.105946 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.105950 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.105953 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.105957 | orchestrator |
2026-01-10 14:38:45.105961 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-10 14:38:45.105964 | orchestrator | Saturday 10 January 2026 14:35:22 +0000 (0:00:00.602) 0:08:23.482 ******
2026-01-10 14:38:45.105968 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.105972 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.105975 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.105979 | orchestrator |
2026-01-10 14:38:45.105982 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-10 14:38:45.105986 | orchestrator | Saturday 10 January 2026 14:35:22 +0000 (0:00:00.362) 0:08:23.845 ******
2026-01-10 14:38:45.105990 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-10 14:38:45.105994 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-01-10 14:38:45.105997 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-01-10 14:38:45.106001 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-01-10 14:38:45.106005 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-01-10 14:38:45.106008 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-01-10 14:38:45.106045 | orchestrator |
2026-01-10 14:38:45.106050 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-10 14:38:45.106054 | orchestrator | Saturday 10 January 2026 14:35:23 +0000 (0:00:01.129) 0:08:24.974 ******
2026-01-10 14:38:45.106061 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-10 14:38:45.106065 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-10 14:38:45.106068 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-10 14:38:45.106072 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-01-10 14:38:45.106076 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-01-10 14:38:45.106080 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-01-10 14:38:45.106083 | orchestrator |
2026-01-10 14:38:45.106087 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-10 14:38:45.106091 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:02.260) 0:08:27.235 ******
2026-01-10 14:38:45.106095 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-10 14:38:45.106099 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-10 14:38:45.106102 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-10 14:38:45.106106 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-01-10 14:38:45.106110 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-01-10 14:38:45.106113 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-01-10 14:38:45.106117 | orchestrator |
2026-01-10 14:38:45.106121 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-10 14:38:45.106124 | orchestrator | Saturday 10 January 2026 14:35:30 +0000 (0:00:04.158) 0:08:31.393 ******
2026-01-10 14:38:45.106128 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106132 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106135 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:38:45.106139 | orchestrator |
2026-01-10 14:38:45.106143 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-10 14:38:45.106146 | orchestrator | Saturday 10 January 2026 14:35:33 +0000 (0:00:02.995) 0:08:34.388 ******
2026-01-10 14:38:45.106150 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106155 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106161 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-01-10 14:38:45.106171 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:38:45.106182 | orchestrator |
2026-01-10 14:38:45.106190 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-01-10 14:38:45.106195 | orchestrator | Saturday 10 January 2026 14:35:45 +0000 (0:00:12.632) 0:08:47.021 ******
2026-01-10 14:38:45.106201 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106206 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106213 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.106219 | orchestrator |
2026-01-10 14:38:45.106224 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-10 14:38:45.106230 | orchestrator | Saturday 10 January 2026 14:35:46 +0000 (0:00:01.165) 0:08:48.187 ******
2026-01-10 14:38:45.106236 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106241 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106248 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.106254 | orchestrator |
2026-01-10 14:38:45.106260 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-10 14:38:45.106271 | orchestrator | Saturday 10 January 2026 14:35:47 +0000 (0:00:00.403) 0:08:48.590 ******
2026-01-10 14:38:45.106282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:38:45.106288 | orchestrator |
2026-01-10 14:38:45.106294 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-10 14:38:45.106302 | orchestrator | Saturday 10 January 2026 14:35:48 +0000 (0:00:00.818) 0:08:49.408 ******
2026-01-10 14:38:45.106306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.106310 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:38:45.106314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:38:45.106317 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106321 | orchestrator |
2026-01-10 14:38:45.106325 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-10 14:38:45.106328 | orchestrator | Saturday 10 January 2026 14:35:48 +0000 (0:00:00.431) 0:08:49.840 ******
2026-01-10 14:38:45.106332 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106336 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106339 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.106343 | orchestrator |
2026-01-10 14:38:45.106347 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-10 14:38:45.106351 | orchestrator | Saturday 10 January 2026 14:35:48 +0000 (0:00:00.326) 0:08:50.167 ******
2026-01-10 14:38:45.106355 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106358 | orchestrator |
2026-01-10 14:38:45.106362 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-10 14:38:45.106366 | orchestrator | Saturday 10 January 2026 14:35:49 +0000 (0:00:00.244) 0:08:50.411 ******
2026-01-10 14:38:45.106370 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106373 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106377 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.106381 | orchestrator |
2026-01-10 14:38:45.106384 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-10 14:38:45.106388 | orchestrator | Saturday 10 January 2026 14:35:49 +0000 (0:00:00.318) 0:08:50.729 ******
2026-01-10 14:38:45.106392 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106395 | orchestrator |
2026-01-10 14:38:45.106399 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-10 14:38:45.106403 | orchestrator | Saturday 10 January 2026 14:35:49 +0000 (0:00:00.259) 0:08:50.989 ******
2026-01-10 14:38:45.106407 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106411 | orchestrator |
2026-01-10 14:38:45.106414 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-10 14:38:45.106423 | orchestrator | Saturday 10 January 2026 14:35:49 +0000 (0:00:00.279) 0:08:51.269 ******
2026-01-10 14:38:45.106427 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106431 | orchestrator |
2026-01-10 14:38:45.106434 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-10 14:38:45.106438 | orchestrator | Saturday 10 January 2026 14:35:50 +0000 (0:00:00.143) 0:08:51.412 ******
2026-01-10 14:38:45.106442 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106445 | orchestrator |
2026-01-10 14:38:45.106449 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-10 14:38:45.106453 | orchestrator | Saturday 10 January 2026 14:35:51 +0000 (0:00:00.890) 0:08:52.303 ******
2026-01-10 14:38:45.106459 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106465 | orchestrator |
2026-01-10 14:38:45.106471 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-10 14:38:45.106477 | orchestrator | Saturday 10 January 2026 14:35:51 +0000 (0:00:00.240) 0:08:52.543 ******
2026-01-10 14:38:45.106482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:38:45.106488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:38:45.106493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:38:45.106499 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106505 | orchestrator |
2026-01-10 14:38:45.106511 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-10 14:38:45.106516 | orchestrator | Saturday 10 January 2026 14:35:51 +0000 (0:00:00.442) 0:08:52.985 ******
2026-01-10 14:38:45.106522 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106527 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106533 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.106540 | orchestrator |
2026-01-10 14:38:45.106546 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-10 14:38:45.106552 | orchestrator | Saturday 10 January 2026 14:35:52 +0000 (0:00:00.347) 0:08:53.333 ******
2026-01-10 14:38:45.106558 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106564 | orchestrator |
2026-01-10 14:38:45.106567 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-10 14:38:45.106571 | orchestrator | Saturday 10 January 2026 14:35:52 +0000 (0:00:00.238) 0:08:53.571 ******
2026-01-10 14:38:45.106575 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106579 | orchestrator |
2026-01-10 14:38:45.106583 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-01-10 14:38:45.106586 | orchestrator |
2026-01-10 14:38:45.106590 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-10 14:38:45.106594 | orchestrator | Saturday 10 January 2026 14:35:53 +0000 (0:00:01.003) 0:08:54.575 ******
2026-01-10 14:38:45.106598 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.106603 | orchestrator |
2026-01-10 14:38:45.106607 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-10 14:38:45.106611 | orchestrator | Saturday 10 January 2026 14:35:54 +0000 (0:00:01.232) 0:08:55.807 ******
2026-01-10 14:38:45.106618 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.106622 | orchestrator |
2026-01-10 14:38:45.106629 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-10 14:38:45.106633 | orchestrator | Saturday 10 January 2026 14:35:55 +0000 (0:00:01.342) 0:08:57.149 ******
2026-01-10 14:38:45.106637 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106641 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106645 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.106648 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.106652 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.106678 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.106686 | orchestrator |
2026-01-10 14:38:45.106690 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-10 14:38:45.106693 | orchestrator | Saturday 10 January 2026 14:35:56 +0000 (0:00:01.067) 0:08:58.217 ******
2026-01-10 14:38:45.106697 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.106701 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.106705 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.106709 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.106712 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.106716 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.106720 | orchestrator |
2026-01-10 14:38:45.106724 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-10 14:38:45.106727 | orchestrator | Saturday 10 January 2026 14:35:57 +0000 (0:00:00.754) 0:08:58.971 ******
2026-01-10 14:38:45.106731 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.106735 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.106739 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.106742 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.106746 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.106750 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.106754 | orchestrator |
2026-01-10 14:38:45.106757 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-10 14:38:45.106761 | orchestrator | Saturday 10 January 2026 14:35:58 +0000 (0:00:01.069) 0:09:00.041 ******
2026-01-10 14:38:45.106765 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.106768 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.106772 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.106776 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.106780 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.106783 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.106787 | orchestrator |
2026-01-10 14:38:45.106791 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-10 14:38:45.106797 | orchestrator | Saturday 10 January 2026 14:35:59 +0000 (0:00:00.799) 0:09:00.841 ******
2026-01-10 14:38:45.106805 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106814 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106820 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.106826 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.106831 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.106837 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.106843 | orchestrator |
2026-01-10 14:38:45.106848 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-10 14:38:45.106855 | orchestrator | Saturday 10 January 2026 14:36:00 +0000 (0:00:01.429) 0:09:02.270 ******
2026-01-10 14:38:45.106861 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106866 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106873 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.106879 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.106886 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.106890 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.106893 | orchestrator |
2026-01-10 14:38:45.106897 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-10 14:38:45.106901 | orchestrator | Saturday 10 January 2026 14:36:01 +0000 (0:00:00.634) 0:09:02.905 ******
2026-01-10 14:38:45.106904 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.106908 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.106912 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.106916 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.106919 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.106923 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.106927 | orchestrator |
2026-01-10 14:38:45.106930 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-10 14:38:45.106934 | orchestrator | Saturday 10 January 2026 14:36:02 +0000 (0:00:00.977) 0:09:03.883 ******
2026-01-10 14:38:45.106946 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.106950 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.106953 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.106957 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.106961 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.106965 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.106968 | orchestrator |
2026-01-10 14:38:45.106972 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-10 14:38:45.106976 | orchestrator | Saturday 10 January 2026 14:36:03 +0000 (0:00:01.337) 0:09:05.221 ******
2026-01-10 14:38:45.106979 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.106983 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.106987 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.106990 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.106994 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.106998 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.107001 | orchestrator |
2026-01-10 14:38:45.107005 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-10 14:38:45.107009 | orchestrator | Saturday 10 January 2026 14:36:05 +0000 (0:00:02.035) 0:09:07.256 ******
2026-01-10 14:38:45.107012 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.107016 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.107020 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.107024 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.107027 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.107031 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.107035 | orchestrator |
2026-01-10 14:38:45.107039 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-10 14:38:45.107043 | orchestrator | Saturday 10 January 2026 14:36:06 +0000 (0:00:00.566) 0:09:07.823 ******
2026-01-10 14:38:45.107046 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.107050 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.107058 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.107062 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.107066 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.107074 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.107079 | orchestrator |
2026-01-10 14:38:45.107085 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-10 14:38:45.107091 | orchestrator | Saturday 10 January 2026 14:36:07 +0000 (0:00:00.648) 0:09:08.471 ******
2026-01-10 14:38:45.107097 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107103 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107108 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107114 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.107120 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.107126 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.107131 | orchestrator |
2026-01-10 14:38:45.107135 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-10 14:38:45.107139 | orchestrator | Saturday 10 January 2026 14:36:07 +0000 (0:00:00.431) 0:09:08.903 ******
2026-01-10 14:38:45.107143 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107146 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107150 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107153 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.107157 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.107161 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.107164 | orchestrator |
2026-01-10 14:38:45.107168 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-10 14:38:45.107172 | orchestrator | Saturday 10 January 2026 14:36:08 +0000 (0:00:00.664) 0:09:09.568 ******
2026-01-10 14:38:45.107175 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107179 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107183 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107186 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.107195 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.107198 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.107202 | orchestrator |
2026-01-10 14:38:45.107206 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-10 14:38:45.107210 | orchestrator | Saturday 10 January 2026 14:36:08 +0000 (0:00:00.434) 0:09:10.002 ******
2026-01-10 14:38:45.107214 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.107217 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.107221 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.107224 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.107229 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.107232 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.107236 | orchestrator |
2026-01-10 14:38:45.107240 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-10 14:38:45.107244 | orchestrator | Saturday 10 January 2026 14:36:09 +0000 (0:00:00.625) 0:09:10.628 ******
2026-01-10 14:38:45.107248 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.107251 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.107255 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.107259 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:45.107262 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:38:45.107266 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:38:45.107270 | orchestrator |
2026-01-10 14:38:45.107273 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-10 14:38:45.107277 | orchestrator | Saturday 10 January 2026 14:36:09 +0000 (0:00:00.484) 0:09:11.112 ******
2026-01-10 14:38:45.107281 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.107285 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.107288 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.107292 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.107296 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.107299 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.107303 | orchestrator |
2026-01-10 14:38:45.107307 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-10 14:38:45.107310 | orchestrator | Saturday 10 January 2026 14:36:10 +0000 (0:00:00.653) 0:09:11.765 ******
2026-01-10 14:38:45.107314 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107318 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107321 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107325 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.107329 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.107332 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.107336 | orchestrator |
2026-01-10 14:38:45.107340 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-10 14:38:45.107343 | orchestrator | Saturday 10 January 2026 14:36:11 +0000 (0:00:00.569) 0:09:12.334 ******
2026-01-10 14:38:45.107347 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107350 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107354 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107358 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.107362 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.107366 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.107369 | orchestrator |
2026-01-10 14:38:45.107373 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-10 14:38:45.107377 | orchestrator | Saturday 10 January 2026 14:36:12 +0000 (0:00:01.102) 0:09:13.437 ******
2026-01-10 14:38:45.107380 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:38:45.107384 | orchestrator |
2026-01-10 14:38:45.107388 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-10 14:38:45.107392 | orchestrator | Saturday 10 January 2026 14:36:16 +0000 (0:00:04.153) 0:09:17.590 ******
2026-01-10 14:38:45.107395 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:38:45.107399 | orchestrator |
2026-01-10 14:38:45.107403 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-10 14:38:45.107410 | orchestrator | Saturday 10 January 2026 14:36:18 +0000 (0:00:02.100) 0:09:19.691 ******
2026-01-10 14:38:45.107414 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:38:45.107418 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:38:45.107422 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:38:45.107426 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.107429 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.107433 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.107437 | orchestrator |
2026-01-10 14:38:45.107440 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-10 14:38:45.107444 | orchestrator | Saturday 10 January 2026 14:36:20 +0000 (0:00:02.013) 0:09:21.704 ******
2026-01-10 14:38:45.107452 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:38:45.107456 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:38:45.107462 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:38:45.107466 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.107470 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.107474 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.107478 | orchestrator |
2026-01-10 14:38:45.107482 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-10 14:38:45.107485 | orchestrator | Saturday 10 January 2026 14:36:21 +0000 (0:00:01.180) 0:09:22.884 ******
2026-01-10 14:38:45.107490 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.107495 | orchestrator |
2026-01-10 14:38:45.107499 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-10 14:38:45.107503 | orchestrator | Saturday 10 January 2026 14:36:22 +0000 (0:00:01.127) 0:09:24.012 ******
2026-01-10 14:38:45.107506 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:38:45.107510 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:38:45.107514 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:38:45.107517 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.107521 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.107525 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.107528 | orchestrator |
2026-01-10 14:38:45.107532 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-10 14:38:45.107536 | orchestrator | Saturday 10 January 2026 14:36:24 +0000 (0:00:02.111) 0:09:26.124 ******
2026-01-10 14:38:45.107540 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:38:45.107543 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:38:45.107547 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:38:45.107551 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.107554 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.107558 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.107562 | orchestrator |
2026-01-10 14:38:45.107565 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-10 14:38:45.107569 | orchestrator | Saturday 10 January 2026 14:36:29 +0000 (0:00:04.254) 0:09:30.378 ******
2026-01-10 14:38:45.107573 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:45.107577 | orchestrator |
2026-01-10 14:38:45.107581 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-10 14:38:45.107584 | orchestrator | Saturday 10 January 2026 14:36:31 +0000 (0:00:02.005) 0:09:32.384 ******
2026-01-10 14:38:45.107588 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107592 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107596 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107599 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.107603 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.107607 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.107610 | orchestrator |
2026-01-10 14:38:45.107614 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-10 14:38:45.107622 | orchestrator | Saturday 10 January 2026 14:36:32 +0000 (0:00:01.676) 0:09:34.060 ******
2026-01-10 14:38:45.107626 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:38:45.107630 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:38:45.107633 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:38:45.107637 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:45.107641 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:45.107644 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:45.107648 | orchestrator |
2026-01-10 14:38:45.107652 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-10 14:38:45.107655 | orchestrator | Saturday 10 January 2026 14:36:36 +0000 (0:00:03.383) 0:09:37.443 ******
2026-01-10 14:38:45.107693 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107697 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107700 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107704 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:45.107708 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:38:45.107712 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:38:45.107715 | orchestrator |
2026-01-10 14:38:45.107719 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-10 14:38:45.107723 | orchestrator |
2026-01-10 14:38:45.107726 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-10 14:38:45.107730 | orchestrator | Saturday 10 January 2026 14:36:37 +0000 (0:00:01.188) 0:09:38.632 ******
2026-01-10 14:38:45.107734 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:38:45.107738 | orchestrator |
2026-01-10 14:38:45.107742 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-10 14:38:45.107746 | orchestrator | Saturday 10 January 2026 14:36:37 +0000 (0:00:00.490) 0:09:39.122 ******
2026-01-10 14:38:45.107750 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:38:45.107754 | orchestrator |
2026-01-10 14:38:45.107758 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-10 14:38:45.107761 | orchestrator | Saturday 10 January 2026 14:36:38 +0000 (0:00:00.796) 0:09:39.919 ******
2026-01-10 14:38:45.107765 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.107769 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.107772 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.107776 | orchestrator |
2026-01-10 14:38:45.107780 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-10 14:38:45.107784 | orchestrator | Saturday 10 January 2026 14:36:38 +0000 (0:00:00.311) 0:09:40.230 ******
2026-01-10 14:38:45.107787 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107791 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107795 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107799 | orchestrator |
2026-01-10 14:38:45.107803 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-10 14:38:45.107810 | orchestrator | Saturday 10 January 2026 14:36:39 +0000 (0:00:00.727) 0:09:40.958 ******
2026-01-10 14:38:45.107814 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107818 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107825 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107829 | orchestrator |
2026-01-10 14:38:45.107833 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-10 14:38:45.107836 | orchestrator | Saturday 10 January 2026 14:36:40 +0000 (0:00:01.041) 0:09:41.999 ******
2026-01-10 14:38:45.107840 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107844 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107848 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107852 | orchestrator |
2026-01-10 14:38:45.107855 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-10 14:38:45.107859 | orchestrator | Saturday 10 January 2026 14:36:41 +0000 (0:00:00.744) 0:09:42.744 ******
2026-01-10 14:38:45.107868 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.107871 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.107875 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.107879 | orchestrator |
2026-01-10 14:38:45.107883 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-10 14:38:45.107886 | orchestrator | Saturday 10 January 2026 14:36:41 +0000 (0:00:00.318) 0:09:43.062 ******
2026-01-10 14:38:45.107890 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.107894 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.107898 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.107904 | orchestrator |
2026-01-10 14:38:45.107910 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-10 14:38:45.107917 | orchestrator | Saturday 10 January 2026 14:36:42 +0000 (0:00:00.384) 0:09:43.447 ******
2026-01-10 14:38:45.107922 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:38:45.107928 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:38:45.107934 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:38:45.107939 | orchestrator |
2026-01-10 14:38:45.107945 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-10 14:38:45.107951 | orchestrator | Saturday 10 January 2026 14:36:42 +0000 (0:00:00.700) 0:09:44.147 ******
2026-01-10 14:38:45.107956 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107962 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.107968 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.107973 | orchestrator |
2026-01-10 14:38:45.107979 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-10 14:38:45.107985 | orchestrator | Saturday 10 January 2026 14:36:43 +0000 (0:00:00.768) 0:09:44.915 ******
2026-01-10 14:38:45.107991 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:38:45.107997 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:38:45.108003 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:38:45.108009 | orchestrator |
2026-01-10 14:38:45.108015 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-10 14:38:45.108021 | orchestrator |
Saturday 10 January 2026 14:36:44 +0000 (0:00:00.766) 0:09:45.682 ****** 2026-01-10 14:38:45.108027 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.108033 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.108039 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.108045 | orchestrator | 2026-01-10 14:38:45.108052 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:38:45.108056 | orchestrator | Saturday 10 January 2026 14:36:44 +0000 (0:00:00.326) 0:09:46.008 ****** 2026-01-10 14:38:45.108106 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.108114 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.108120 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.108126 | orchestrator | 2026-01-10 14:38:45.108133 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:38:45.108139 | orchestrator | Saturday 10 January 2026 14:36:45 +0000 (0:00:00.749) 0:09:46.758 ****** 2026-01-10 14:38:45.108145 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.108151 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.108156 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.108162 | orchestrator | 2026-01-10 14:38:45.108169 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:38:45.108174 | orchestrator | Saturday 10 January 2026 14:36:45 +0000 (0:00:00.380) 0:09:47.138 ****** 2026-01-10 14:38:45.108181 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.108187 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.108193 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.108199 | orchestrator | 2026-01-10 14:38:45.108206 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:38:45.108212 | orchestrator | Saturday 10 January 2026 14:36:46 +0000 
(0:00:00.373) 0:09:47.512 ****** 2026-01-10 14:38:45.108219 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.108235 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.108242 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.108248 | orchestrator | 2026-01-10 14:38:45.108255 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:38:45.108261 | orchestrator | Saturday 10 January 2026 14:36:46 +0000 (0:00:00.342) 0:09:47.854 ****** 2026-01-10 14:38:45.108266 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.108270 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.108274 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.108278 | orchestrator | 2026-01-10 14:38:45.108282 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:38:45.108285 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:00.665) 0:09:48.519 ****** 2026-01-10 14:38:45.108289 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.108293 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.108296 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.108300 | orchestrator | 2026-01-10 14:38:45.108304 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:38:45.108308 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:00.331) 0:09:48.851 ****** 2026-01-10 14:38:45.108312 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.108315 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.108319 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.108323 | orchestrator | 2026-01-10 14:38:45.108326 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:38:45.108330 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:00.336) 
0:09:49.188 ****** 2026-01-10 14:38:45.108334 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.108344 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.108348 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.108351 | orchestrator | 2026-01-10 14:38:45.108359 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:38:45.108363 | orchestrator | Saturday 10 January 2026 14:36:48 +0000 (0:00:00.349) 0:09:49.538 ****** 2026-01-10 14:38:45.108367 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.108371 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.108375 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.108378 | orchestrator | 2026-01-10 14:38:45.108382 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-10 14:38:45.108386 | orchestrator | Saturday 10 January 2026 14:36:49 +0000 (0:00:00.961) 0:09:50.500 ****** 2026-01-10 14:38:45.108390 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.108393 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.108397 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-01-10 14:38:45.108401 | orchestrator | 2026-01-10 14:38:45.108405 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-01-10 14:38:45.108409 | orchestrator | Saturday 10 January 2026 14:36:49 +0000 (0:00:00.431) 0:09:50.932 ****** 2026-01-10 14:38:45.108412 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:38:45.108416 | orchestrator | 2026-01-10 14:38:45.108420 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-01-10 14:38:45.108424 | orchestrator | Saturday 10 January 2026 14:36:51 +0000 (0:00:02.342) 0:09:53.275 ****** 2026-01-10 14:38:45.108429 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-01-10 14:38:45.108435 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.108438 | orchestrator | 2026-01-10 14:38:45.108442 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-01-10 14:38:45.108446 | orchestrator | Saturday 10 January 2026 14:36:52 +0000 (0:00:00.518) 0:09:53.793 ****** 2026-01-10 14:38:45.108451 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:38:45.108465 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:38:45.108469 | orchestrator | 2026-01-10 14:38:45.108472 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-01-10 14:38:45.108476 | orchestrator | Saturday 10 January 2026 14:37:00 +0000 (0:00:08.324) 0:10:02.118 ****** 2026-01-10 14:38:45.108480 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:38:45.108484 | orchestrator | 2026-01-10 14:38:45.108487 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-10 14:38:45.108491 | orchestrator | Saturday 10 January 2026 14:37:04 +0000 (0:00:03.820) 0:10:05.938 ****** 2026-01-10 14:38:45.108495 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-10 14:38:45.108499 | orchestrator | 2026-01-10 14:38:45.108502 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-10 14:38:45.108506 | orchestrator | Saturday 10 January 2026 14:37:05 +0000 (0:00:00.607) 0:10:06.546 ****** 2026-01-10 14:38:45.108510 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-10 14:38:45.108514 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-10 14:38:45.108517 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-10 14:38:45.108521 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-01-10 14:38:45.108525 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-01-10 14:38:45.108529 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-01-10 14:38:45.108532 | orchestrator | 2026-01-10 14:38:45.108536 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-10 14:38:45.108540 | orchestrator | Saturday 10 January 2026 14:37:06 +0000 (0:00:01.153) 0:10:07.699 ****** 2026-01-10 14:38:45.108543 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:38:45.108547 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:38:45.108551 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:38:45.108555 | orchestrator | 2026-01-10 14:38:45.108560 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:38:45.108566 | orchestrator | Saturday 10 January 2026 14:37:09 +0000 (0:00:02.881) 0:10:10.581 ****** 2026-01-10 14:38:45.108577 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:38:45.108583 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-01-10 14:38:45.108588 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.108594 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:38:45.108600 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-10 14:38:45.108605 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.108611 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:38:45.108622 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-10 14:38:45.108628 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.108633 | orchestrator | 2026-01-10 14:38:45.108645 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-10 14:38:45.108651 | orchestrator | Saturday 10 January 2026 14:37:10 +0000 (0:00:01.282) 0:10:11.864 ****** 2026-01-10 14:38:45.108673 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.108680 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.108685 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.108698 | orchestrator | 2026-01-10 14:38:45.108703 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-10 14:38:45.108709 | orchestrator | Saturday 10 January 2026 14:37:13 +0000 (0:00:03.145) 0:10:15.009 ****** 2026-01-10 14:38:45.108715 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.108721 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.108737 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.108750 | orchestrator | 2026-01-10 14:38:45.108756 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-10 14:38:45.108762 | orchestrator | Saturday 10 January 2026 14:37:14 +0000 (0:00:00.312) 0:10:15.321 ****** 2026-01-10 14:38:45.108769 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-10 14:38:45.108775 | orchestrator | 2026-01-10 14:38:45.108781 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-10 14:38:45.108788 | orchestrator | Saturday 10 January 2026 14:37:14 +0000 (0:00:00.755) 0:10:16.077 ****** 2026-01-10 14:38:45.108794 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.108799 | orchestrator | 2026-01-10 14:38:45.108805 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-10 14:38:45.108811 | orchestrator | Saturday 10 January 2026 14:37:15 +0000 (0:00:00.533) 0:10:16.611 ****** 2026-01-10 14:38:45.108817 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.108823 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.108829 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.108836 | orchestrator | 2026-01-10 14:38:45.108840 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-10 14:38:45.108844 | orchestrator | Saturday 10 January 2026 14:37:16 +0000 (0:00:01.331) 0:10:17.943 ****** 2026-01-10 14:38:45.108848 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.108851 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.108855 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.108859 | orchestrator | 2026-01-10 14:38:45.108862 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-10 14:38:45.108866 | orchestrator | Saturday 10 January 2026 14:37:18 +0000 (0:00:01.609) 0:10:19.552 ****** 2026-01-10 14:38:45.108870 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.108873 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.108877 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.108880 | orchestrator | 2026-01-10 
14:38:45.108884 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-10 14:38:45.108888 | orchestrator | Saturday 10 January 2026 14:37:20 +0000 (0:00:02.078) 0:10:21.631 ****** 2026-01-10 14:38:45.108891 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.108895 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.108899 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.108902 | orchestrator | 2026-01-10 14:38:45.108906 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-10 14:38:45.108910 | orchestrator | Saturday 10 January 2026 14:37:22 +0000 (0:00:02.334) 0:10:23.966 ****** 2026-01-10 14:38:45.108913 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.108917 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.108921 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.108924 | orchestrator | 2026-01-10 14:38:45.108928 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:38:45.108932 | orchestrator | Saturday 10 January 2026 14:37:24 +0000 (0:00:01.534) 0:10:25.500 ****** 2026-01-10 14:38:45.108935 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.108939 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.108943 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.108946 | orchestrator | 2026-01-10 14:38:45.108950 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-10 14:38:45.108956 | orchestrator | Saturday 10 January 2026 14:37:24 +0000 (0:00:00.659) 0:10:26.160 ****** 2026-01-10 14:38:45.108971 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.108979 | orchestrator | 2026-01-10 14:38:45.108985 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-10 14:38:45.108991 | orchestrator | Saturday 10 January 2026 14:37:25 +0000 (0:00:00.837) 0:10:26.997 ****** 2026-01-10 14:38:45.108999 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109003 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109007 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.109010 | orchestrator | 2026-01-10 14:38:45.109014 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-10 14:38:45.109018 | orchestrator | Saturday 10 January 2026 14:37:26 +0000 (0:00:00.437) 0:10:27.435 ****** 2026-01-10 14:38:45.109021 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.109025 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.109029 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.109035 | orchestrator | 2026-01-10 14:38:45.109041 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-10 14:38:45.109046 | orchestrator | Saturday 10 January 2026 14:37:27 +0000 (0:00:01.465) 0:10:28.901 ****** 2026-01-10 14:38:45.109052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:38:45.109058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:38:45.109063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:38:45.109069 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109075 | orchestrator | 2026-01-10 14:38:45.109081 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-10 14:38:45.109093 | orchestrator | Saturday 10 January 2026 14:37:28 +0000 (0:00:00.907) 0:10:29.809 ****** 2026-01-10 14:38:45.109105 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109111 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109118 | orchestrator | ok: [testbed-node-5] 2026-01-10 
14:38:45.109124 | orchestrator | 2026-01-10 14:38:45.109132 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-10 14:38:45.109138 | orchestrator | 2026-01-10 14:38:45.109144 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 14:38:45.109147 | orchestrator | Saturday 10 January 2026 14:37:29 +0000 (0:00:00.846) 0:10:30.655 ****** 2026-01-10 14:38:45.109151 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.109155 | orchestrator | 2026-01-10 14:38:45.109159 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:38:45.109163 | orchestrator | Saturday 10 January 2026 14:37:29 +0000 (0:00:00.537) 0:10:31.193 ****** 2026-01-10 14:38:45.109166 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.109170 | orchestrator | 2026-01-10 14:38:45.109174 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:38:45.109177 | orchestrator | Saturday 10 January 2026 14:37:30 +0000 (0:00:00.778) 0:10:31.971 ****** 2026-01-10 14:38:45.109181 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109185 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.109188 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.109192 | orchestrator | 2026-01-10 14:38:45.109196 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-10 14:38:45.109199 | orchestrator | Saturday 10 January 2026 14:37:31 +0000 (0:00:00.338) 0:10:32.310 ****** 2026-01-10 14:38:45.109203 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109207 | orchestrator | ok: [testbed-node-5] 2026-01-10 
14:38:45.109210 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109214 | orchestrator | 2026-01-10 14:38:45.109218 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:38:45.109226 | orchestrator | Saturday 10 January 2026 14:37:31 +0000 (0:00:00.808) 0:10:33.118 ****** 2026-01-10 14:38:45.109229 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109233 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109237 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.109241 | orchestrator | 2026-01-10 14:38:45.109244 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:38:45.109248 | orchestrator | Saturday 10 January 2026 14:37:32 +0000 (0:00:01.043) 0:10:34.162 ****** 2026-01-10 14:38:45.109252 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109256 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109259 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.109263 | orchestrator | 2026-01-10 14:38:45.109267 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-10 14:38:45.109271 | orchestrator | Saturday 10 January 2026 14:37:33 +0000 (0:00:00.764) 0:10:34.927 ****** 2026-01-10 14:38:45.109275 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109278 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.109282 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.109286 | orchestrator | 2026-01-10 14:38:45.109290 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-10 14:38:45.109293 | orchestrator | Saturday 10 January 2026 14:37:33 +0000 (0:00:00.305) 0:10:35.233 ****** 2026-01-10 14:38:45.109297 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109301 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.109305 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:38:45.109308 | orchestrator | 2026-01-10 14:38:45.109312 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:38:45.109316 | orchestrator | Saturday 10 January 2026 14:37:34 +0000 (0:00:00.321) 0:10:35.554 ****** 2026-01-10 14:38:45.109320 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109323 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.109327 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.109331 | orchestrator | 2026-01-10 14:38:45.109335 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:38:45.109338 | orchestrator | Saturday 10 January 2026 14:37:34 +0000 (0:00:00.624) 0:10:36.179 ****** 2026-01-10 14:38:45.109342 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109346 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109352 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.109358 | orchestrator | 2026-01-10 14:38:45.109363 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-10 14:38:45.109369 | orchestrator | Saturday 10 January 2026 14:37:35 +0000 (0:00:00.823) 0:10:37.002 ****** 2026-01-10 14:38:45.109376 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109382 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109388 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.109393 | orchestrator | 2026-01-10 14:38:45.109397 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:38:45.109401 | orchestrator | Saturday 10 January 2026 14:37:36 +0000 (0:00:00.780) 0:10:37.783 ****** 2026-01-10 14:38:45.109405 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109408 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.109412 | orchestrator | skipping: [testbed-node-5] 2026-01-10 
14:38:45.109416 | orchestrator | 2026-01-10 14:38:45.109419 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:38:45.109423 | orchestrator | Saturday 10 January 2026 14:37:36 +0000 (0:00:00.329) 0:10:38.113 ****** 2026-01-10 14:38:45.109427 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109431 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.109434 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.109438 | orchestrator | 2026-01-10 14:38:45.109442 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:38:45.109445 | orchestrator | Saturday 10 January 2026 14:37:37 +0000 (0:00:00.569) 0:10:38.683 ****** 2026-01-10 14:38:45.109452 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109456 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109460 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.109463 | orchestrator | 2026-01-10 14:38:45.109471 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:38:45.109477 | orchestrator | Saturday 10 January 2026 14:37:37 +0000 (0:00:00.362) 0:10:39.045 ****** 2026-01-10 14:38:45.109481 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109485 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109489 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.109493 | orchestrator | 2026-01-10 14:38:45.109499 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:38:45.109505 | orchestrator | Saturday 10 January 2026 14:37:38 +0000 (0:00:00.340) 0:10:39.386 ****** 2026-01-10 14:38:45.109511 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109517 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109522 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.109526 | orchestrator | 2026-01-10 
14:38:45.109530 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:38:45.109534 | orchestrator | Saturday 10 January 2026 14:37:38 +0000 (0:00:00.382) 0:10:39.768 ****** 2026-01-10 14:38:45.109537 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109541 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.109545 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.109548 | orchestrator | 2026-01-10 14:38:45.109552 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:38:45.109556 | orchestrator | Saturday 10 January 2026 14:37:39 +0000 (0:00:00.585) 0:10:40.353 ****** 2026-01-10 14:38:45.109559 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109563 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.109567 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.109571 | orchestrator | 2026-01-10 14:38:45.109574 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:38:45.109578 | orchestrator | Saturday 10 January 2026 14:37:39 +0000 (0:00:00.310) 0:10:40.664 ****** 2026-01-10 14:38:45.109582 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109586 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.109589 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.109593 | orchestrator | 2026-01-10 14:38:45.109597 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:38:45.109600 | orchestrator | Saturday 10 January 2026 14:37:39 +0000 (0:00:00.320) 0:10:40.984 ****** 2026-01-10 14:38:45.109604 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109608 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109611 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.109615 | orchestrator | 2026-01-10 14:38:45.109619 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:38:45.109622 | orchestrator | Saturday 10 January 2026 14:37:40 +0000 (0:00:00.339) 0:10:41.324 ****** 2026-01-10 14:38:45.109626 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.109630 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.109633 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.109637 | orchestrator | 2026-01-10 14:38:45.109641 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-10 14:38:45.109645 | orchestrator | Saturday 10 January 2026 14:37:40 +0000 (0:00:00.890) 0:10:42.214 ****** 2026-01-10 14:38:45.109648 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.109652 | orchestrator | 2026-01-10 14:38:45.109670 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-10 14:38:45.109675 | orchestrator | Saturday 10 January 2026 14:37:41 +0000 (0:00:00.563) 0:10:42.777 ****** 2026-01-10 14:38:45.109679 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:38:45.109683 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:38:45.109690 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:38:45.109694 | orchestrator | 2026-01-10 14:38:45.109698 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:38:45.109702 | orchestrator | Saturday 10 January 2026 14:37:43 +0000 (0:00:02.361) 0:10:45.139 ****** 2026-01-10 14:38:45.109705 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:38:45.109709 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:38:45.109713 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.109717 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-10 14:38:45.109720 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-10 14:38:45.109724 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.109728 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:38:45.109731 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-10 14:38:45.109735 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.109739 | orchestrator | 2026-01-10 14:38:45.109742 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-10 14:38:45.109746 | orchestrator | Saturday 10 January 2026 14:37:45 +0000 (0:00:01.555) 0:10:46.695 ****** 2026-01-10 14:38:45.109750 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109754 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.109757 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.109761 | orchestrator | 2026-01-10 14:38:45.109764 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-10 14:38:45.109768 | orchestrator | Saturday 10 January 2026 14:37:45 +0000 (0:00:00.352) 0:10:47.048 ****** 2026-01-10 14:38:45.109772 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.109776 | orchestrator | 2026-01-10 14:38:45.109779 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-10 14:38:45.109783 | orchestrator | Saturday 10 January 2026 14:37:46 +0000 (0:00:00.653) 0:10:47.701 ****** 2026-01-10 14:38:45.109787 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-10 14:38:45.109795 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-10 14:38:45.109803 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-10 14:38:45.109807 | orchestrator | 2026-01-10 14:38:45.109811 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-10 14:38:45.109815 | orchestrator | Saturday 10 January 2026 14:37:48 +0000 (0:00:01.727) 0:10:49.428 ****** 2026-01-10 14:38:45.109819 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:38:45.109823 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-10 14:38:45.109826 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:38:45.109830 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-10 14:38:45.109834 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:38:45.109838 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-10 14:38:45.109841 | orchestrator | 2026-01-10 14:38:45.109845 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-10 14:38:45.109849 | orchestrator | Saturday 10 January 2026 14:37:53 +0000 (0:00:04.912) 0:10:54.341 ****** 2026-01-10 14:38:45.109856 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:38:45.109860 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:38:45.109864 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:38:45.109867 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:38:45.109871 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:38:45.109875 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:38:45.109878 | orchestrator | 2026-01-10 14:38:45.109882 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:38:45.109886 | orchestrator | Saturday 10 January 2026 14:37:55 +0000 (0:00:02.569) 0:10:56.910 ****** 2026-01-10 14:38:45.109889 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:38:45.109893 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.109897 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:38:45.109901 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.109904 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:38:45.109908 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.109912 | orchestrator | 2026-01-10 14:38:45.109916 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-10 14:38:45.109919 | orchestrator | Saturday 10 January 2026 14:37:56 +0000 (0:00:01.305) 0:10:58.216 ****** 2026-01-10 14:38:45.109923 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-10 14:38:45.109927 | orchestrator | 2026-01-10 14:38:45.109930 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-10 14:38:45.109934 | orchestrator | Saturday 10 January 2026 14:37:57 +0000 (0:00:00.293) 0:10:58.509 ****** 2026-01-10 14:38:45.109938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-10 14:38:45.109942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:38:45.109946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:38:45.109950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:38:45.109953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:38:45.109957 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.109961 | orchestrator | 2026-01-10 14:38:45.109964 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-10 14:38:45.109968 | orchestrator | Saturday 10 January 2026 14:37:58 +0000 (0:00:01.248) 0:10:59.758 ****** 2026-01-10 14:38:45.109972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:38:45.109976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:38:45.109980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:38:45.109983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:38:45.109987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:38:45.109993 | orchestrator | skipping: [testbed-node-3] 2026-01-10 
14:38:45.109997 | orchestrator | 2026-01-10 14:38:45.110004 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-10 14:38:45.110061 | orchestrator | Saturday 10 January 2026 14:37:59 +0000 (0:00:00.621) 0:11:00.379 ****** 2026-01-10 14:38:45.110068 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:38:45.110071 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:38:45.110075 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:38:45.110079 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:38:45.110083 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:38:45.110087 | orchestrator | 2026-01-10 14:38:45.110090 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-10 14:38:45.110094 | orchestrator | Saturday 10 January 2026 14:38:31 +0000 (0:00:32.001) 0:11:32.380 ****** 2026-01-10 14:38:45.110098 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.110102 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.110105 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.110109 | orchestrator | 2026-01-10 14:38:45.110113 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-10 14:38:45.110117 | orchestrator | 
Saturday 10 January 2026 14:38:31 +0000 (0:00:00.347) 0:11:32.727 ****** 2026-01-10 14:38:45.110120 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.110124 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.110128 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.110131 | orchestrator | 2026-01-10 14:38:45.110135 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-10 14:38:45.110139 | orchestrator | Saturday 10 January 2026 14:38:31 +0000 (0:00:00.323) 0:11:33.051 ****** 2026-01-10 14:38:45.110142 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.110146 | orchestrator | 2026-01-10 14:38:45.110150 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-10 14:38:45.110154 | orchestrator | Saturday 10 January 2026 14:38:32 +0000 (0:00:00.878) 0:11:33.930 ****** 2026-01-10 14:38:45.110157 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.110161 | orchestrator | 2026-01-10 14:38:45.110165 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-10 14:38:45.110169 | orchestrator | Saturday 10 January 2026 14:38:33 +0000 (0:00:00.542) 0:11:34.473 ****** 2026-01-10 14:38:45.110172 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.110176 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.110180 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.110184 | orchestrator | 2026-01-10 14:38:45.110187 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-10 14:38:45.110191 | orchestrator | Saturday 10 January 2026 14:38:34 +0000 (0:00:01.387) 0:11:35.860 ****** 2026-01-10 14:38:45.110195 | orchestrator | changed: 
[testbed-node-3] 2026-01-10 14:38:45.110199 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.110202 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.110206 | orchestrator | 2026-01-10 14:38:45.110210 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-10 14:38:45.110213 | orchestrator | Saturday 10 January 2026 14:38:36 +0000 (0:00:01.614) 0:11:37.475 ****** 2026-01-10 14:38:45.110217 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:38:45.110221 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:38:45.110231 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:38:45.110234 | orchestrator | 2026-01-10 14:38:45.110238 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-10 14:38:45.110242 | orchestrator | Saturday 10 January 2026 14:38:38 +0000 (0:00:02.028) 0:11:39.504 ****** 2026-01-10 14:38:45.110246 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-10 14:38:45.110250 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-10 14:38:45.110254 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-10 14:38:45.110260 | orchestrator | 2026-01-10 14:38:45.110265 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:38:45.110271 | orchestrator | Saturday 10 January 2026 14:38:41 +0000 (0:00:03.057) 0:11:42.561 ****** 2026-01-10 14:38:45.110277 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.110284 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.110290 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.110296 | orchestrator 
| 2026-01-10 14:38:45.110302 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-10 14:38:45.110308 | orchestrator | Saturday 10 January 2026 14:38:41 +0000 (0:00:00.368) 0:11:42.930 ****** 2026-01-10 14:38:45.110320 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:38:45.110324 | orchestrator | 2026-01-10 14:38:45.110327 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-10 14:38:45.110335 | orchestrator | Saturday 10 January 2026 14:38:42 +0000 (0:00:00.540) 0:11:43.470 ****** 2026-01-10 14:38:45.110341 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.110347 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.110352 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.110357 | orchestrator | 2026-01-10 14:38:45.110365 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-10 14:38:45.110372 | orchestrator | Saturday 10 January 2026 14:38:42 +0000 (0:00:00.650) 0:11:44.121 ****** 2026-01-10 14:38:45.110379 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:38:45.110384 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:38:45.110390 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:38:45.110396 | orchestrator | 2026-01-10 14:38:45.110402 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-10 14:38:45.110408 | orchestrator | Saturday 10 January 2026 14:38:43 +0000 (0:00:00.338) 0:11:44.459 ****** 2026-01-10 14:38:45.110413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:38:45.110418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:38:45.110424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:38:45.110429 | orchestrator 
| skipping: [testbed-node-3] 2026-01-10 14:38:45.110435 | orchestrator | 2026-01-10 14:38:45.110440 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-10 14:38:45.110446 | orchestrator | Saturday 10 January 2026 14:38:43 +0000 (0:00:00.600) 0:11:45.060 ****** 2026-01-10 14:38:45.110452 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:38:45.110457 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:38:45.110463 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:38:45.110469 | orchestrator | 2026-01-10 14:38:45.110475 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:38:45.110481 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-10 14:38:45.110488 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-10 14:38:45.110500 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-10 14:38:45.110507 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-10 14:38:45.110512 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-10 14:38:45.110520 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-10 14:38:45.110524 | orchestrator | 2026-01-10 14:38:45.110528 | orchestrator | 2026-01-10 14:38:45.110532 | orchestrator | 2026-01-10 14:38:45.110536 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:38:45.110539 | orchestrator | Saturday 10 January 2026 14:38:44 +0000 (0:00:00.247) 0:11:45.307 ****** 2026-01-10 14:38:45.110543 | orchestrator | =============================================================================== 
2026-01-10 14:38:45.110547 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 66.71s 2026-01-10 14:38:45.110551 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 45.16s 2026-01-10 14:38:45.110554 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.00s 2026-01-10 14:38:45.110558 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.11s 2026-01-10 14:38:45.110561 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.75s 2026-01-10 14:38:45.110565 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.62s 2026-01-10 14:38:45.110569 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.63s 2026-01-10 14:38:45.110572 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.78s 2026-01-10 14:38:45.110576 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.46s 2026-01-10 14:38:45.110580 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.32s 2026-01-10 14:38:45.110583 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.59s 2026-01-10 14:38:45.110587 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.12s 2026-01-10 14:38:45.110590 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.69s 2026-01-10 14:38:45.110594 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.91s 2026-01-10 14:38:45.110598 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.25s 2026-01-10 14:38:45.110601 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.16s 2026-01-10 
14:38:45.110605 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.15s 2026-01-10 14:38:45.110609 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.82s 2026-01-10 14:38:45.110612 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.59s 2026-01-10 14:38:45.110616 | orchestrator | ceph-handler : Restart the ceph-crash service --------------------------- 3.38s 2026-01-10 14:38:45.110627 | orchestrator | 2026-01-10 14:38:45 | INFO  | Task 1dd86352-9a7c-4b94-821d-1148a0517a1f is in state STARTED 2026-01-10 14:38:45.110631 | orchestrator | 2026-01-10 14:38:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:38:48.149611 | orchestrator | 2026-01-10 14:38:48 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:38:48.150470 | orchestrator | 2026-01-10 14:38:48 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:38:48.151451 | orchestrator | 2026-01-10 14:38:48 | INFO  | Task 1dd86352-9a7c-4b94-821d-1148a0517a1f is in state STARTED 2026-01-10 14:38:48.151581 | orchestrator | 2026-01-10 14:38:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:38:51.198577 | orchestrator | 2026-01-10 14:38:51 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:38:51.199327 | orchestrator | 2026-01-10 14:38:51 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:38:51.200684 | orchestrator | 2026-01-10 14:38:51 | INFO  | Task 1dd86352-9a7c-4b94-821d-1148a0517a1f is in state SUCCESS 2026-01-10 14:38:51.200836 | orchestrator | 2026-01-10 14:38:51.202287 | orchestrator | 2026-01-10 14:38:51.202339 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:38:51.202352 | orchestrator | 2026-01-10 14:38:51.202361 | orchestrator | TASK [Group hosts based on Kolla 
action] *************************************** 2026-01-10 14:38:51.202369 | orchestrator | Saturday 10 January 2026 14:35:56 +0000 (0:00:00.263) 0:00:00.263 ****** 2026-01-10 14:38:51.202378 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:38:51.202386 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:38:51.202394 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:38:51.202404 | orchestrator | 2026-01-10 14:38:51.202418 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:38:51.202431 | orchestrator | Saturday 10 January 2026 14:35:57 +0000 (0:00:00.352) 0:00:00.616 ****** 2026-01-10 14:38:51.202443 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-10 14:38:51.202456 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-10 14:38:51.202470 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-10 14:38:51.202484 | orchestrator | 2026-01-10 14:38:51.202519 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-10 14:38:51.202535 | orchestrator | 2026-01-10 14:38:51.202547 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-10 14:38:51.202560 | orchestrator | Saturday 10 January 2026 14:35:57 +0000 (0:00:00.530) 0:00:01.147 ****** 2026-01-10 14:38:51.202574 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:38:51.202587 | orchestrator | 2026-01-10 14:38:51.202601 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-10 14:38:51.202610 | orchestrator | Saturday 10 January 2026 14:35:58 +0000 (0:00:00.543) 0:00:01.690 ****** 2026-01-10 14:38:51.202618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-10 14:38:51.202626 | orchestrator | 
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-10 14:38:51.202634 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-10 14:38:51.202642 | orchestrator | 2026-01-10 14:38:51.202670 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-10 14:38:51.202680 | orchestrator | Saturday 10 January 2026 14:35:59 +0000 (0:00:00.718) 0:00:02.408 ****** 2026-01-10 14:38:51.202691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.202714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.202755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.202766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.202776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.202789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.202805 | orchestrator | 2026-01-10 14:38:51.202813 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-10 14:38:51.202821 | orchestrator | Saturday 10 January 2026 14:36:00 +0000 (0:00:01.993) 0:00:04.402 ****** 2026-01-10 14:38:51.202829 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:38:51.202837 | orchestrator | 2026-01-10 14:38:51.202844 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-10 14:38:51.202852 | orchestrator | Saturday 10 January 2026 14:36:01 +0000 (0:00:00.602) 0:00:05.004 ****** 2026-01-10 14:38:51.202867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.202878 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.202888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.202899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.202922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.202933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.202943 | orchestrator | 2026-01-10 14:38:51.202952 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-10 14:38:51.202961 | orchestrator | Saturday 10 January 2026 14:36:04 +0000 (0:00:03.175) 0:00:08.180 ****** 2026-01-10 14:38:51.202971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:38:51.202989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:38:51.202999 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:51.203009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:38:51.203026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:38:51.203036 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:51.203046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:38:51.203061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:38:51.203071 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:51.203080 | orchestrator | 2026-01-10 14:38:51.203088 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-10 14:38:51.203099 | orchestrator | Saturday 10 January 2026 14:36:06 +0000 (0:00:01.783) 0:00:09.964 ****** 2026-01-10 14:38:51.203108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:38:51.203123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:38:51.203132 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:51.203142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:38:51.203158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:38:51.203167 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:51.203180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-10 14:38:51.203196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-10 14:38:51.203206 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:51.203215 | orchestrator | 2026-01-10 14:38:51.203224 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-10 14:38:51.203232 | orchestrator | Saturday 10 
January 2026 14:36:07 +0000 (0:00:01.094) 0:00:11.059 ****** 2026-01-10 14:38:51.203242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.203257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.203270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.203286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.203296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.203316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.203325 | orchestrator | 
2026-01-10 14:38:51.203334 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-01-10 14:38:51.203343 | orchestrator | Saturday 10 January 2026 14:36:10 +0000 (0:00:02.426) 0:00:13.485 ******
2026-01-10 14:38:51.203352 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:51.203361 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:51.203369 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:51.203378 | orchestrator |
2026-01-10 14:38:51.203386 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-01-10 14:38:51.203395 | orchestrator | Saturday 10 January 2026 14:36:13 +0000 (0:00:03.128) 0:00:16.614 ******
2026-01-10 14:38:51.203404 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:51.203412 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:51.203421 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:51.203430 | orchestrator |
2026-01-10 14:38:51.203438 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2026-01-10 14:38:51.203450 | orchestrator | Saturday 10 January 2026 14:36:14 +0000 (0:00:01.622) 0:00:18.237 ******
2026-01-10 14:38:51.203460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode':
'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.203475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.203490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-10 14:38:51.203499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.203513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-01-10 14:38:51.203529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-10 14:38:51.203543 | orchestrator | 2026-01-10 14:38:51.203552 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-10 14:38:51.203561 | orchestrator | Saturday 10 January 2026 14:36:16 +0000 (0:00:02.108) 0:00:20.345 ****** 2026-01-10 14:38:51.203570 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:38:51.203579 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:38:51.203587 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:38:51.203596 | orchestrator | 2026-01-10 14:38:51.203604 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-10 14:38:51.203613 | orchestrator | Saturday 10 January 2026 14:36:17 +0000 (0:00:00.281) 0:00:20.627 ****** 2026-01-10 14:38:51.203622 | orchestrator | 2026-01-10 14:38:51.203631 | orchestrator | TASK [opensearch : 
Flush handlers] *********************************************
2026-01-10 14:38:51.203643 | orchestrator | Saturday 10 January 2026 14:36:17 +0000 (0:00:00.078) 0:00:20.705 ******
2026-01-10 14:38:51.203679 | orchestrator |
2026-01-10 14:38:51.203694 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-01-10 14:38:51.203710 | orchestrator | Saturday 10 January 2026 14:36:17 +0000 (0:00:00.061) 0:00:20.767 ******
2026-01-10 14:38:51.203732 | orchestrator |
2026-01-10 14:38:51.203748 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-01-10 14:38:51.203762 | orchestrator | Saturday 10 January 2026 14:36:17 +0000 (0:00:00.061) 0:00:20.829 ******
2026-01-10 14:38:51.203776 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:51.203791 | orchestrator |
2026-01-10 14:38:51.203806 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-01-10 14:38:51.203821 | orchestrator | Saturday 10 January 2026 14:36:17 +0000 (0:00:00.179) 0:00:21.009 ******
2026-01-10 14:38:51.203836 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:38:51.203851 | orchestrator |
2026-01-10 14:38:51.203866 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-01-10 14:38:51.203881 | orchestrator | Saturday 10 January 2026 14:36:18 +0000 (0:00:00.549) 0:00:21.558 ******
2026-01-10 14:38:51.203897 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:51.203912 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:51.203928 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:51.203942 | orchestrator |
2026-01-10 14:38:51.203957 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-01-10 14:38:51.203972 | orchestrator | Saturday 10 January 2026 14:37:18 +0000 (0:01:00.215) 0:01:21.773 ******
2026-01-10 14:38:51.203988 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:51.204003 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:38:51.204018 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:38:51.204032 | orchestrator |
2026-01-10 14:38:51.204046 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-10 14:38:51.204061 | orchestrator | Saturday 10 January 2026 14:38:32 +0000 (0:01:14.532) 0:02:36.306 ******
2026-01-10 14:38:51.204075 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:38:51.204091 | orchestrator |
2026-01-10 14:38:51.204100 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-01-10 14:38:51.204109 | orchestrator | Saturday 10 January 2026 14:38:33 +0000 (0:00:00.761) 0:02:37.068 ******
2026-01-10 14:38:51.204118 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:51.204127 | orchestrator |
2026-01-10 14:38:51.204135 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-01-10 14:38:51.204144 | orchestrator | Saturday 10 January 2026 14:38:36 +0000 (0:00:02.870) 0:02:39.939 ******
2026-01-10 14:38:51.204153 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:51.204161 | orchestrator |
2026-01-10 14:38:51.204171 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-01-10 14:38:51.204194 | orchestrator | Saturday 10 January 2026 14:38:39 +0000 (0:00:02.619) 0:02:42.558 ******
2026-01-10 14:38:51.204224 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:38:51.204239 | orchestrator |
2026-01-10 14:38:51.204254 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-01-10 14:38:51.204268 | orchestrator | Saturday 10 January 2026 14:38:41 +0000 (0:00:02.794) 0:02:45.353 ******
2026-01-10 14:38:51.204281 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:51.204295 | orchestrator |
2026-01-10 14:38:51.204308 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-01-10 14:38:51.204323 | orchestrator | Saturday 10 January 2026 14:38:45 +0000 (0:00:03.335) 0:02:48.688 ******
2026-01-10 14:38:51.204338 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:38:51.204353 | orchestrator |
2026-01-10 14:38:51.204368 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:38:51.204384 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-10 14:38:51.204400 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-10 14:38:51.204426 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-10 14:38:51.204442 | orchestrator |
2026-01-10 14:38:51.204456 | orchestrator |
2026-01-10 14:38:51.204471 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:38:51.204485 | orchestrator | Saturday 10 January 2026 14:38:48 +0000 (0:00:03.006) 0:02:51.695 ******
2026-01-10 14:38:51.204501 | orchestrator | ===============================================================================
2026-01-10 14:38:51.204515 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 74.53s
2026-01-10 14:38:51.204530 | orchestrator | opensearch : Restart opensearch container ------------------------------ 60.22s
2026-01-10 14:38:51.204545 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.34s
2026-01-10 14:38:51.204561 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.18s
2026-01-10 14:38:51.204575 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.13s
2026-01-10 14:38:51.204588 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.01s
2026-01-10 14:38:51.204603 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.87s
2026-01-10 14:38:51.204617 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.79s
2026-01-10 14:38:51.204630 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.62s
2026-01-10 14:38:51.204646 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.43s
2026-01-10 14:38:51.204704 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.11s
2026-01-10 14:38:51.204720 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.99s
2026-01-10 14:38:51.204734 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.78s
2026-01-10 14:38:51.204748 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.62s
2026-01-10 14:38:51.204762 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.09s
2026-01-10 14:38:51.204776 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.76s
2026-01-10 14:38:51.204791 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.72s
2026-01-10 14:38:51.204805 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.60s
2026-01-10 14:38:51.204820 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.55s
2026-01-10 14:38:51.204836 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2026-01-10 14:38:51.204852 | orchestrator |
2026-01-10 14:38:51 | INFO  | Wait 1
second(s) until the next check 2026-01-10 14:38:54.242366 | orchestrator | 2026-01-10 14:38:54 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:38:54.244478 | orchestrator | 2026-01-10 14:38:54 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:38:54.244609 | orchestrator | 2026-01-10 14:38:54 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:38:57.287948 | orchestrator | 2026-01-10 14:38:57 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:38:57.289766 | orchestrator | 2026-01-10 14:38:57 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:38:57.289870 | orchestrator | 2026-01-10 14:38:57 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:39:00.337221 | orchestrator | 2026-01-10 14:39:00 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state STARTED 2026-01-10 14:39:00.337945 | orchestrator | 2026-01-10 14:39:00 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:39:00.338328 | orchestrator | 2026-01-10 14:39:00 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:39:03.385200 | orchestrator | 2026-01-10 14:39:03.385407 | orchestrator | 2026-01-10 14:39:03 | INFO  | Task a93b62f2-1386-407a-abbe-b4d3ffebb7ac is in state SUCCESS 2026-01-10 14:39:03.386315 | orchestrator | 2026-01-10 14:39:03.386356 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-10 14:39:03.386364 | orchestrator | 2026-01-10 14:39:03.386370 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-10 14:39:03.386376 | orchestrator | Saturday 10 January 2026 14:35:56 +0000 (0:00:00.097) 0:00:00.097 ****** 2026-01-10 14:39:03.386382 | orchestrator | ok: [localhost] => { 2026-01-10 14:39:03.386390 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service 
has not yet been deployed. This is fine." 2026-01-10 14:39:03.386396 | orchestrator | } 2026-01-10 14:39:03.386403 | orchestrator | 2026-01-10 14:39:03.386409 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-10 14:39:03.386415 | orchestrator | Saturday 10 January 2026 14:35:56 +0000 (0:00:00.059) 0:00:00.157 ****** 2026-01-10 14:39:03.386421 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-10 14:39:03.386429 | orchestrator | ...ignoring 2026-01-10 14:39:03.386435 | orchestrator | 2026-01-10 14:39:03.386442 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-10 14:39:03.386448 | orchestrator | Saturday 10 January 2026 14:35:59 +0000 (0:00:02.981) 0:00:03.139 ****** 2026-01-10 14:39:03.386454 | orchestrator | skipping: [localhost] 2026-01-10 14:39:03.386459 | orchestrator | 2026-01-10 14:39:03.386465 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-10 14:39:03.386471 | orchestrator | Saturday 10 January 2026 14:35:59 +0000 (0:00:00.064) 0:00:03.204 ****** 2026-01-10 14:39:03.386477 | orchestrator | ok: [localhost] 2026-01-10 14:39:03.386483 | orchestrator | 2026-01-10 14:39:03.386489 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:39:03.386494 | orchestrator | 2026-01-10 14:39:03.386500 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:39:03.386506 | orchestrator | Saturday 10 January 2026 14:36:00 +0000 (0:00:00.200) 0:00:03.404 ****** 2026-01-10 14:39:03.386512 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.386518 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:03.386524 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:03.386530 | 
orchestrator | 2026-01-10 14:39:03.386535 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:39:03.386541 | orchestrator | Saturday 10 January 2026 14:36:00 +0000 (0:00:00.344) 0:00:03.748 ****** 2026-01-10 14:39:03.386567 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-10 14:39:03.386574 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-10 14:39:03.386580 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-10 14:39:03.386585 | orchestrator | 2026-01-10 14:39:03.386591 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-10 14:39:03.386597 | orchestrator | 2026-01-10 14:39:03.386603 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-10 14:39:03.386609 | orchestrator | Saturday 10 January 2026 14:36:01 +0000 (0:00:00.664) 0:00:04.413 ****** 2026-01-10 14:39:03.386615 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-10 14:39:03.386624 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-10 14:39:03.386633 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-10 14:39:03.386688 | orchestrator | 2026-01-10 14:39:03.386698 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 14:39:03.386708 | orchestrator | Saturday 10 January 2026 14:36:01 +0000 (0:00:00.418) 0:00:04.832 ****** 2026-01-10 14:39:03.386718 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:03.386729 | orchestrator | 2026-01-10 14:39:03.386738 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-10 14:39:03.386748 | orchestrator | Saturday 10 January 2026 14:36:02 +0000 (0:00:00.839) 0:00:05.671 ****** 2026-01-10 
14:39:03.386791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:39:03.386830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:39:03.386848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:39:03.386854 | orchestrator | 2026-01-10 14:39:03.386869 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-10 14:39:03.386875 | orchestrator | Saturday 10 January 2026 14:36:06 +0000 (0:00:04.088) 0:00:09.760 ****** 2026-01-10 14:39:03.386882 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.386893 | orchestrator | 
changed: [testbed-node-0] 2026-01-10 14:39:03.386903 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.386912 | orchestrator | 2026-01-10 14:39:03.386922 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-10 14:39:03.386932 | orchestrator | Saturday 10 January 2026 14:36:07 +0000 (0:00:00.828) 0:00:10.589 ****** 2026-01-10 14:39:03.386941 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.386951 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.386961 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.386970 | orchestrator | 2026-01-10 14:39:03.386980 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-10 14:39:03.386989 | orchestrator | Saturday 10 January 2026 14:36:08 +0000 (0:00:01.458) 0:00:12.047 ****** 2026-01-10 14:39:03.387007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:39:03.387030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:39:03.387042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:39:03.387058 | orchestrator | 2026-01-10 14:39:03.387068 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-10 14:39:03.387078 | orchestrator | Saturday 10 January 2026 14:36:13 +0000 (0:00:04.303) 0:00:16.351 ****** 2026-01-10 14:39:03.387087 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.387097 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.387106 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.387115 | orchestrator | 2026-01-10 14:39:03.387125 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-10 14:39:03.387135 | orchestrator | Saturday 10 January 2026 14:36:14 +0000 (0:00:01.152) 0:00:17.503 ****** 2026-01-10 14:39:03.387144 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.387153 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:03.387162 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:03.387172 | orchestrator | 2026-01-10 14:39:03.387182 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 14:39:03.387192 | orchestrator | Saturday 10 January 2026 14:36:18 +0000 (0:00:03.747) 0:00:21.251 ****** 2026-01-10 14:39:03.387202 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:03.387211 | orchestrator | 2026-01-10 14:39:03.387221 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over extra CA certificates] ******** 2026-01-10 14:39:03.387231 | orchestrator | Saturday 10 January 2026 14:36:18 +0000 (0:00:00.498) 0:00:21.750 ****** 2026-01-10 14:39:03.387256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-01-10 14:39:03.387274 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:03.387285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:03.387295 | orchestrator | skipping: 
[testbed-node-1] 2026-01-10 14:39:03.387317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:03.387334 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.387344 | orchestrator | 2026-01-10 
14:39:03.387354 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-10 14:39:03.387364 | orchestrator | Saturday 10 January 2026 14:36:21 +0000 (0:00:03.200) 0:00:24.950 ****** 2026-01-10 14:39:03.387375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:03.387385 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.387406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-01-10 14:39:03.387423 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.387473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:03.387485 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:39:03.387493 | orchestrator | 2026-01-10 14:39:03.387502 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-10 14:39:03.387511 | orchestrator | Saturday 10 January 2026 14:36:24 +0000 (0:00:03.035) 0:00:27.986 ****** 2026-01-10 14:39:03.387520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:03.387542 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:03.387566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:03.387577 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.387587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
 2026-01-10 14:39:03.387604 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.387614 | orchestrator | 2026-01-10 14:39:03.387623 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-10 14:39:03.387632 | orchestrator | Saturday 10 January 2026 14:36:27 +0000 (0:00:02.823) 0:00:30.810 ****** 2026-01-10 14:39:03.387864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:39:03.387889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:39:03.387924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:39:03.387936 | orchestrator | 2026-01-10 14:39:03.387946 | orchestrator | TASK 
[mariadb : Create MariaDB volume] ***************************************** 2026-01-10 14:39:03.387956 | orchestrator | Saturday 10 January 2026 14:36:31 +0000 (0:00:03.988) 0:00:34.798 ****** 2026-01-10 14:39:03.387965 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.387975 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:03.387984 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:03.387993 | orchestrator | 2026-01-10 14:39:03.388002 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-10 14:39:03.388010 | orchestrator | Saturday 10 January 2026 14:36:32 +0000 (0:00:00.879) 0:00:35.677 ****** 2026-01-10 14:39:03.388021 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.388030 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:03.388039 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:03.388048 | orchestrator | 2026-01-10 14:39:03.388057 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-10 14:39:03.388066 | orchestrator | Saturday 10 January 2026 14:36:33 +0000 (0:00:00.678) 0:00:36.356 ****** 2026-01-10 14:39:03.388076 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.388086 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:03.388097 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:03.388106 | orchestrator | 2026-01-10 14:39:03.388115 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-10 14:39:03.388124 | orchestrator | Saturday 10 January 2026 14:36:33 +0000 (0:00:00.420) 0:00:36.776 ****** 2026-01-10 14:39:03.388133 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-10 14:39:03.388144 | orchestrator | ...ignoring 2026-01-10 14:39:03.388155 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-10 14:39:03.388164 | orchestrator | ...ignoring 2026-01-10 14:39:03.388172 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-10 14:39:03.388190 | orchestrator | ...ignoring 2026-01-10 14:39:03.388199 | orchestrator | 2026-01-10 14:39:03.388208 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-10 14:39:03.388217 | orchestrator | Saturday 10 January 2026 14:36:44 +0000 (0:00:10.930) 0:00:47.706 ****** 2026-01-10 14:39:03.388225 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.388233 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:03.388242 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:03.388251 | orchestrator | 2026-01-10 14:39:03.388260 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-10 14:39:03.388270 | orchestrator | Saturday 10 January 2026 14:36:44 +0000 (0:00:00.445) 0:00:48.152 ****** 2026-01-10 14:39:03.388279 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:03.388289 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.388298 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.388308 | orchestrator | 2026-01-10 14:39:03.388318 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-10 14:39:03.388327 | orchestrator | Saturday 10 January 2026 14:36:45 +0000 (0:00:00.808) 0:00:48.960 ****** 2026-01-10 14:39:03.388337 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:03.388346 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.388355 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.388365 | orchestrator | 2026-01-10 14:39:03.388375 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-10 14:39:03.388385 | orchestrator | Saturday 10 January 2026 14:36:46 +0000 (0:00:00.550) 0:00:49.510 ****** 2026-01-10 14:39:03.388394 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:03.388403 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.388413 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.388423 | orchestrator | 2026-01-10 14:39:03.388432 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-10 14:39:03.388440 | orchestrator | Saturday 10 January 2026 14:36:46 +0000 (0:00:00.412) 0:00:49.922 ****** 2026-01-10 14:39:03.388450 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.388459 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:03.388469 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:03.388479 | orchestrator | 2026-01-10 14:39:03.388489 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-10 14:39:03.388507 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:00.461) 0:00:50.384 ****** 2026-01-10 14:39:03.388527 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:03.388538 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.388548 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.388558 | orchestrator | 2026-01-10 14:39:03.388567 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 14:39:03.388577 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:00.748) 0:00:51.132 ****** 2026-01-10 14:39:03.388587 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.388597 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.388606 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-10 14:39:03.388616 | orchestrator | 2026-01-10 
14:39:03.388625 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-10 14:39:03.388666 | orchestrator | Saturday 10 January 2026 14:36:48 +0000 (0:00:00.398) 0:00:51.531 ****** 2026-01-10 14:39:03.388677 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.388715 | orchestrator | 2026-01-10 14:39:03.388726 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-10 14:39:03.388736 | orchestrator | Saturday 10 January 2026 14:36:58 +0000 (0:00:10.601) 0:01:02.132 ****** 2026-01-10 14:39:03.388747 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.388757 | orchestrator | 2026-01-10 14:39:03.388768 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 14:39:03.388790 | orchestrator | Saturday 10 January 2026 14:36:59 +0000 (0:00:00.134) 0:01:02.266 ****** 2026-01-10 14:39:03.388801 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:03.388811 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.388820 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.388830 | orchestrator | 2026-01-10 14:39:03.388840 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-10 14:39:03.388849 | orchestrator | Saturday 10 January 2026 14:37:00 +0000 (0:00:01.137) 0:01:03.404 ****** 2026-01-10 14:39:03.388860 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.388870 | orchestrator | 2026-01-10 14:39:03.388880 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-10 14:39:03.388890 | orchestrator | Saturday 10 January 2026 14:37:08 +0000 (0:00:08.082) 0:01:11.486 ****** 2026-01-10 14:39:03.388899 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.388908 | orchestrator | 2026-01-10 14:39:03.388917 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-01-10 14:39:03.388928 | orchestrator | Saturday 10 January 2026 14:37:09 +0000 (0:00:01.629) 0:01:13.116 ****** 2026-01-10 14:39:03.388934 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.388940 | orchestrator | 2026-01-10 14:39:03.388946 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-10 14:39:03.388951 | orchestrator | Saturday 10 January 2026 14:37:12 +0000 (0:00:02.733) 0:01:15.849 ****** 2026-01-10 14:39:03.388957 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.388963 | orchestrator | 2026-01-10 14:39:03.388969 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-10 14:39:03.388975 | orchestrator | Saturday 10 January 2026 14:37:12 +0000 (0:00:00.144) 0:01:15.994 ****** 2026-01-10 14:39:03.388980 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:03.388986 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.388992 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.388997 | orchestrator | 2026-01-10 14:39:03.389003 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-10 14:39:03.389009 | orchestrator | Saturday 10 January 2026 14:37:13 +0000 (0:00:00.369) 0:01:16.363 ****** 2026-01-10 14:39:03.389014 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:03.389020 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-10 14:39:03.389030 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:03.389039 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:03.389048 | orchestrator | 2026-01-10 14:39:03.389058 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-10 14:39:03.389067 | orchestrator | skipping: no hosts matched 2026-01-10 14:39:03.389076 | orchestrator | 2026-01-10 14:39:03.389085 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-10 14:39:03.389094 | orchestrator | 2026-01-10 14:39:03.389103 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-10 14:39:03.389113 | orchestrator | Saturday 10 January 2026 14:37:13 +0000 (0:00:00.588) 0:01:16.951 ****** 2026-01-10 14:39:03.389121 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:03.389130 | orchestrator | 2026-01-10 14:39:03.389140 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-10 14:39:03.389149 | orchestrator | Saturday 10 January 2026 14:37:31 +0000 (0:00:18.073) 0:01:35.025 ****** 2026-01-10 14:39:03.389158 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:03.389168 | orchestrator | 2026-01-10 14:39:03.389178 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-10 14:39:03.389188 | orchestrator | Saturday 10 January 2026 14:37:47 +0000 (0:00:15.649) 0:01:50.675 ****** 2026-01-10 14:39:03.389197 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:03.389206 | orchestrator | 2026-01-10 14:39:03.389217 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-10 14:39:03.389227 | orchestrator | 2026-01-10 14:39:03.389236 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-10 14:39:03.389255 | orchestrator | Saturday 10 January 2026 14:37:50 +0000 (0:00:02.608) 0:01:53.284 ****** 2026-01-10 14:39:03.389264 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:03.389274 | orchestrator | 2026-01-10 14:39:03.389284 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-10 14:39:03.389292 | orchestrator | Saturday 10 January 2026 14:38:14 +0000 (0:00:24.279) 0:02:17.563 ****** 2026-01-10 14:39:03.389301 | 
orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:03.389311 | orchestrator | 2026-01-10 14:39:03.389320 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-10 14:39:03.389329 | orchestrator | Saturday 10 January 2026 14:38:24 +0000 (0:00:10.596) 0:02:28.160 ****** 2026-01-10 14:39:03.389339 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:03.389347 | orchestrator | 2026-01-10 14:39:03.389364 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-10 14:39:03.389374 | orchestrator | 2026-01-10 14:39:03.389395 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-10 14:39:03.389406 | orchestrator | Saturday 10 January 2026 14:38:27 +0000 (0:00:02.705) 0:02:30.866 ****** 2026-01-10 14:39:03.389416 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.389426 | orchestrator | 2026-01-10 14:39:03.389435 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-10 14:39:03.389445 | orchestrator | Saturday 10 January 2026 14:38:40 +0000 (0:00:12.641) 0:02:43.507 ****** 2026-01-10 14:39:03.389454 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.389464 | orchestrator | 2026-01-10 14:39:03.389473 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-10 14:39:03.389483 | orchestrator | Saturday 10 January 2026 14:38:45 +0000 (0:00:04.731) 0:02:48.239 ****** 2026-01-10 14:39:03.389492 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.389502 | orchestrator | 2026-01-10 14:39:03.389511 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-10 14:39:03.389520 | orchestrator | 2026-01-10 14:39:03.389529 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-10 14:39:03.389540 | orchestrator | 
Saturday 10 January 2026 14:38:47 +0000 (0:00:02.878) 0:02:51.117 ****** 2026-01-10 14:39:03.389550 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:03.389559 | orchestrator | 2026-01-10 14:39:03.389570 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-10 14:39:03.389580 | orchestrator | Saturday 10 January 2026 14:38:48 +0000 (0:00:00.581) 0:02:51.698 ****** 2026-01-10 14:39:03.389589 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.389598 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.389607 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.389615 | orchestrator | 2026-01-10 14:39:03.389624 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-10 14:39:03.389634 | orchestrator | Saturday 10 January 2026 14:38:50 +0000 (0:00:02.483) 0:02:54.182 ****** 2026-01-10 14:39:03.389704 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.389715 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.389725 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.389735 | orchestrator | 2026-01-10 14:39:03.389746 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-10 14:39:03.389756 | orchestrator | Saturday 10 January 2026 14:38:53 +0000 (0:00:02.492) 0:02:56.674 ****** 2026-01-10 14:39:03.389766 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.389776 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.389786 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.389796 | orchestrator | 2026-01-10 14:39:03.389806 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-10 14:39:03.389816 | orchestrator | Saturday 10 January 2026 14:38:55 +0000 (0:00:02.543) 0:02:59.218 ****** 2026-01-10 14:39:03.389825 | 
orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.389846 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.389856 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:03.389862 | orchestrator | 2026-01-10 14:39:03.389868 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-10 14:39:03.389874 | orchestrator | Saturday 10 January 2026 14:38:58 +0000 (0:00:02.491) 0:03:01.709 ****** 2026-01-10 14:39:03.389879 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:03.389885 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:03.389891 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:03.389896 | orchestrator | 2026-01-10 14:39:03.389901 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-10 14:39:03.389907 | orchestrator | Saturday 10 January 2026 14:39:01 +0000 (0:00:03.303) 0:03:05.013 ****** 2026-01-10 14:39:03.389912 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:03.389918 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:03.389923 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:03.389928 | orchestrator | 2026-01-10 14:39:03.389934 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:39:03.389940 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-10 14:39:03.389947 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-10 14:39:03.389954 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-10 14:39:03.389959 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-10 14:39:03.389965 | orchestrator | 2026-01-10 14:39:03.389970 | orchestrator | 2026-01-10 14:39:03.389975 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-10 14:39:03.389981 | orchestrator | Saturday 10 January 2026 14:39:02 +0000 (0:00:00.255) 0:03:05.269 ****** 2026-01-10 14:39:03.389986 | orchestrator | =============================================================================== 2026-01-10 14:39:03.389992 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.35s 2026-01-10 14:39:03.389997 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.25s 2026-01-10 14:39:03.390002 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.64s 2026-01-10 14:39:03.390008 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.93s 2026-01-10 14:39:03.390040 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.60s 2026-01-10 14:39:03.390048 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.08s 2026-01-10 14:39:03.390062 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.31s 2026-01-10 14:39:03.390068 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.73s 2026-01-10 14:39:03.390111 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.30s 2026-01-10 14:39:03.390123 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.09s 2026-01-10 14:39:03.390139 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.99s 2026-01-10 14:39:03.390148 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.75s 2026-01-10 14:39:03.390156 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.30s 2026-01-10 14:39:03.390165 | orchestrator | service-cert-copy : 
mariadb | Copying over extra CA certificates -------- 3.20s 2026-01-10 14:39:03.390174 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.04s 2026-01-10 14:39:03.390184 | orchestrator | Check MariaDB service --------------------------------------------------- 2.98s 2026-01-10 14:39:03.390200 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.88s 2026-01-10 14:39:03.390209 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.82s 2026-01-10 14:39:03.390218 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.73s 2026-01-10 14:39:03.390226 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.54s 2026-01-10 14:39:03.390234 | orchestrator | 2026-01-10 14:39:03 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:39:03.390244 | orchestrator | 2026-01-10 14:39:03 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:39:06.423894 | orchestrator | 2026-01-10 14:39:06 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED 2026-01-10 14:39:06.423989 | orchestrator | 2026-01-10 14:39:06 | INFO  | Task 8a8e3844-c9b2-434c-bfb9-8c4d4f2126eb is in state STARTED 2026-01-10 14:39:06.425890 | orchestrator | 2026-01-10 14:39:06 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:39:06.425944 | orchestrator | 2026-01-10 14:39:06 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:40:50.101619 | orchestrator | 2026-01-10 14:40:50 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED 2026-01-10 14:40:50.104007 | orchestrator | 2026-01-10 14:40:50 | INFO  | Task 8a8e3844-c9b2-434c-bfb9-8c4d4f2126eb is in state STARTED 2026-01-10 14:40:50.106242 | orchestrator | 2026-01-10 14:40:50 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:40:50.106316 | orchestrator | 2026-01-10 14:40:50 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:40:53.150084 | orchestrator | 2026-01-10 14:40:53 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED 2026-01-10 14:40:53.150668 | orchestrator | 2026-01-10 14:40:53 | INFO  | Task 8a8e3844-c9b2-434c-bfb9-8c4d4f2126eb is in state SUCCESS 2026-01-10 14:40:53.152429 | orchestrator | 2026-01-10 14:40:53.152474 | orchestrator | 2026-01-10 14:40:53.152484 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:40:53.152492 | orchestrator | 2026-01-10 14:40:53.152499 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:40:53.152506 | orchestrator |
Saturday 10 January 2026 14:39:07 +0000 (0:00:00.305) 0:00:00.305 ****** 2026-01-10 14:40:53.152552 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:53.152559 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:53.152565 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:53.152572 | orchestrator | 2026-01-10 14:40:53.152579 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:40:53.152585 | orchestrator | Saturday 10 January 2026 14:39:07 +0000 (0:00:00.399) 0:00:00.705 ****** 2026-01-10 14:40:53.152591 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-10 14:40:53.152598 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-10 14:40:53.152605 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-10 14:40:53.152612 | orchestrator | 2026-01-10 14:40:53.152618 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-10 14:40:53.152624 | orchestrator | 2026-01-10 14:40:53.152631 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10 14:40:53.152638 | orchestrator | Saturday 10 January 2026 14:39:07 +0000 (0:00:00.472) 0:00:01.177 ****** 2026-01-10 14:40:53.152655 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:53.152662 | orchestrator | 2026-01-10 14:40:53.152668 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-10 14:40:53.152674 | orchestrator | Saturday 10 January 2026 14:39:08 +0000 (0:00:00.471) 0:00:01.649 ****** 2026-01-10 14:40:53.152694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:40:53.152730 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:40:53.152743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:40:53.152756 | orchestrator | 2026-01-10 14:40:53.152762 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-10 14:40:53.152769 | orchestrator | Saturday 10 January 2026 14:39:09 +0000 (0:00:01.326) 0:00:02.976 ****** 2026-01-10 14:40:53.152775 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:40:53.152782 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:40:53.152789 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:40:53.152795 | orchestrator | 2026-01-10 14:40:53.152802 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10 14:40:53.152809 | orchestrator | Saturday 10 January 2026 14:39:10 +0000 (0:00:00.459) 0:00:03.436 ****** 2026-01-10 14:40:53.152815 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-10 14:40:53.152826 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-10 14:40:53.152833 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-10 14:40:53.152840 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-10 14:40:53.152847 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-10 14:40:53.152854 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-10 14:40:53.152861 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-10 14:40:53.152867 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'watcher', 'enabled': False})  2026-01-10 14:40:53.152874 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-10 14:40:53.152880 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-10 14:40:53.152886 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-10 14:40:53.152892 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-10 14:40:53.152898 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-10 14:40:53.152906 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-10 14:40:53.152913 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-10 14:40:53.152919 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-10 14:40:53.152926 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-10 14:40:53.152933 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-10 14:40:53.152940 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-10 14:40:53.152947 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-10 14:40:53.152953 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-10 14:40:53.152960 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-10 14:40:53.152967 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-10 14:40:53.152979 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-10 14:40:53.152989 | orchestrator 
| included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-01-10 14:40:53.152997 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-01-10 14:40:53.153006 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-01-10 14:40:53.153015 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-01-10 14:40:53.153023 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-01-10 14:40:53.153030 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-01-10 14:40:53.153045 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-01-10 14:40:53.153053 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-01-10 14:40:53.153062 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-01-10 14:40:53.153071 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-01-10 14:40:53.153085 | orchestrator | 
2026-01-10 14:40:53.153093 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:40:53.153101 | orchestrator | Saturday 10 January 2026 14:39:10 +0000 (0:00:00.727) 0:00:04.164 ******
2026-01-10 14:40:53.153108 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:53.153117 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:53.153124 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:53.153133 | orchestrator | 
2026-01-10 14:40:53.153140 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:40:53.153147 | orchestrator | Saturday 10 January 2026 14:39:11 +0000 (0:00:00.335) 0:00:04.499 ******
2026-01-10 14:40:53.153155 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153163 | orchestrator | 
2026-01-10 14:40:53.153176 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:40:53.153184 | orchestrator | Saturday 10 January 2026 14:39:11 +0000 (0:00:00.138) 0:00:04.638 ******
2026-01-10 14:40:53.153193 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153201 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.153209 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.153216 | orchestrator | 
2026-01-10 14:40:53.153224 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:40:53.153233 | orchestrator | Saturday 10 January 2026 14:39:11 +0000 (0:00:00.555) 0:00:05.193 ******
2026-01-10 14:40:53.153241 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:53.153249 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:53.153257 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:53.153265 | orchestrator | 
2026-01-10 14:40:53.153273 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:40:53.153280 | orchestrator | Saturday 10 January 2026 14:39:12 +0000 (0:00:00.337) 0:00:05.531 ******
2026-01-10 14:40:53.153287 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153294 | orchestrator | 
2026-01-10 14:40:53.153305 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:40:53.153312 | orchestrator | Saturday 10 January 2026 14:39:12 +0000 (0:00:00.148) 0:00:05.679 ******
2026-01-10 14:40:53.153319 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153326 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.153333 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.153339 | orchestrator | 
2026-01-10 14:40:53.153345 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:40:53.153352 | orchestrator | Saturday 10 January 2026 14:39:12 +0000 (0:00:00.299) 0:00:05.978 ******
2026-01-10 14:40:53.153358 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:53.153364 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:53.153370 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:53.153377 | orchestrator | 
2026-01-10 14:40:53.153383 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:40:53.153389 | orchestrator | Saturday 10 January 2026 14:39:13 +0000 (0:00:00.358) 0:00:06.336 ******
2026-01-10 14:40:53.153395 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153402 | orchestrator | 
2026-01-10 14:40:53.153408 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:40:53.153414 | orchestrator | Saturday 10 January 2026 14:39:13 +0000 (0:00:00.156) 0:00:06.493 ******
2026-01-10 14:40:53.153421 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153426 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.153433 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.153438 | orchestrator | 
2026-01-10 14:40:53.153445 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:40:53.153451 | orchestrator | Saturday 10 January 2026 14:39:13 +0000 (0:00:00.565) 0:00:07.058 ******
2026-01-10 14:40:53.153458 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:53.153464 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:53.153470 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:53.153476 | orchestrator | 
2026-01-10 14:40:53.153482 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:40:53.153488 | orchestrator | Saturday 10 January 2026 14:39:14 +0000 (0:00:00.306) 0:00:07.365 ******
2026-01-10 14:40:53.153495 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153501 | orchestrator | 
2026-01-10 14:40:53.153507 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:40:53.153513 | orchestrator | Saturday 10 January 2026 14:39:14 +0000 (0:00:00.122) 0:00:07.488 ******
2026-01-10 14:40:53.153519 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153525 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.153597 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.153604 | orchestrator | 
2026-01-10 14:40:53.153610 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:40:53.153617 | orchestrator | Saturday 10 January 2026 14:39:14 +0000 (0:00:00.288) 0:00:07.776 ******
2026-01-10 14:40:53.153623 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:53.153630 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:53.153636 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:53.153643 | orchestrator | 
2026-01-10 14:40:53.153649 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:40:53.153655 | orchestrator | Saturday 10 January 2026 14:39:15 +0000 (0:00:00.482) 0:00:08.259 ******
2026-01-10 14:40:53.153661 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153667 | orchestrator | 
2026-01-10 14:40:53.153677 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:40:53.153683 | orchestrator | Saturday 10 January 2026 14:39:15 +0000 (0:00:00.133) 0:00:08.393 ******
2026-01-10 14:40:53.153690 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153696 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.153703 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.153709 | orchestrator | 
2026-01-10 14:40:53.153715 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:40:53.153728 | orchestrator | Saturday 10 January 2026 14:39:15 +0000 (0:00:00.314) 0:00:08.707 ******
2026-01-10 14:40:53.153735 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:53.153741 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:53.153748 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:53.153754 | orchestrator | 
2026-01-10 14:40:53.153761 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:40:53.153767 | orchestrator | Saturday 10 January 2026 14:39:15 +0000 (0:00:00.320) 0:00:09.027 ******
2026-01-10 14:40:53.153774 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153780 | orchestrator | 
2026-01-10 14:40:53.153787 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:40:53.153793 | orchestrator | Saturday 10 January 2026 14:39:15 +0000 (0:00:00.128) 0:00:09.156 ******
2026-01-10 14:40:53.153800 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153806 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.153813 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.153819 | orchestrator | 
2026-01-10 14:40:53.153825 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:40:53.153838 | orchestrator | Saturday 10 January 2026 14:39:16 +0000 (0:00:00.289) 0:00:09.446 ******
2026-01-10 14:40:53.153845 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:53.153852 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:53.153858 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:53.153864 | orchestrator | 
2026-01-10 14:40:53.153871 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:40:53.153877 | orchestrator | Saturday 10 January 2026 14:39:16 +0000 (0:00:00.710) 0:00:10.157 ******
2026-01-10 14:40:53.153883 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153890 | orchestrator | 
2026-01-10 14:40:53.153895 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:40:53.153901 | orchestrator | Saturday 10 January 2026 14:39:17 +0000 (0:00:00.194) 0:00:10.352 ******
2026-01-10 14:40:53.153907 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153913 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.153920 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.153926 | orchestrator | 
2026-01-10 14:40:53.153932 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:40:53.153939 | orchestrator | Saturday 10 January 2026 14:39:17 +0000 (0:00:00.304) 0:00:10.657 ******
2026-01-10 14:40:53.153945 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:53.153951 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:53.153957 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:53.153963 | orchestrator | 
2026-01-10 14:40:53.153969 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:40:53.153975 | orchestrator | Saturday 10 January 2026 14:39:17 +0000 (0:00:00.412) 0:00:11.069 ******
2026-01-10 14:40:53.153981 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.153988 | orchestrator | 
2026-01-10 14:40:53.153994 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:40:53.154000 | orchestrator | Saturday 10 January 2026 14:39:17 +0000 (0:00:00.126) 0:00:11.195 ******
2026-01-10 14:40:53.154007 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.154013 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.154051 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.154058 | orchestrator | 
2026-01-10 14:40:53.154064 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:40:53.154071 | orchestrator | Saturday 10 January 2026 14:39:18 +0000 (0:00:00.283) 0:00:11.479 ******
2026-01-10 14:40:53.154077 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:53.154083 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:53.154089 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:53.154096 | orchestrator | 
2026-01-10 14:40:53.154102 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:40:53.154115 | orchestrator | Saturday 10 January 2026 14:39:18 +0000 (0:00:00.692) 0:00:12.172 ******
2026-01-10 14:40:53.154123 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.154129 | orchestrator | 
2026-01-10 14:40:53.154136 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:40:53.154143 | orchestrator | Saturday 10 January 2026 14:39:19 +0000 (0:00:00.123) 0:00:12.296 ******
2026-01-10 14:40:53.154149 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.154155 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.154168 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.154174 | orchestrator | 
2026-01-10 14:40:53.154179 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:40:53.154185 | orchestrator | Saturday 10 January 2026 14:39:19 +0000 (0:00:00.280) 0:00:12.577 ******
2026-01-10 14:40:53.154191 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:40:53.154197 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:40:53.154204 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:40:53.154209 | orchestrator | 
2026-01-10 14:40:53.154216 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:40:53.154222 | orchestrator | Saturday 10 January 2026 14:39:19 +0000 (0:00:00.323) 0:00:12.900 ******
2026-01-10 14:40:53.154228 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.154234 | orchestrator | 
2026-01-10 14:40:53.154241 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:40:53.154247 | orchestrator | Saturday 10 January 2026 14:39:19 +0000 (0:00:00.124) 0:00:13.025 ******
2026-01-10 14:40:53.154254 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.154260 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.154267 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.154274 | orchestrator | 
2026-01-10 14:40:53.154280 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-01-10 14:40:53.154287 | orchestrator | Saturday 10 January 2026 14:39:20 +0000 (0:00:00.477) 0:00:13.503 ******
2026-01-10 14:40:53.154298 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:40:53.154306 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:40:53.154312 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:40:53.154318 | orchestrator | 
2026-01-10 14:40:53.154325 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-01-10 14:40:53.154332 | orchestrator | Saturday 10 January 2026 14:39:22 +0000 (0:00:01.700) 0:00:15.204 ******
2026-01-10 14:40:53.154338 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-10 14:40:53.154345 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-10 14:40:53.154351 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-10 14:40:53.154358 | orchestrator | 
2026-01-10 14:40:53.154365 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-01-10 14:40:53.154372 | orchestrator | Saturday 10 January 2026 14:39:23 +0000 (0:00:01.865) 0:00:17.069 ******
2026-01-10 14:40:53.154378 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-10 14:40:53.154386 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-10 14:40:53.154392 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-10 14:40:53.154398 | orchestrator | 
2026-01-10 14:40:53.154404 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-10 14:40:53.154418 | orchestrator | Saturday 10 January 2026 14:39:25 +0000 (0:00:02.101) 0:00:19.171 ******
2026-01-10 14:40:53.154425 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-10 14:40:53.154431 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-10 14:40:53.154445 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-10 14:40:53.154452 | orchestrator | 
2026-01-10 14:40:53.154459 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-10 14:40:53.154465 | orchestrator | Saturday 10 January 2026 14:39:27 +0000 (0:00:02.011) 0:00:21.182 ******
2026-01-10 14:40:53.154471 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.154477 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.154483 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.154489 | orchestrator | 
2026-01-10 14:40:53.154495 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-01-10 14:40:53.154501 | orchestrator | Saturday 10 January 2026 14:39:28 +0000 (0:00:00.321) 0:00:21.504 ******
2026-01-10 14:40:53.154508 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.154514 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.154520 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.154541 | orchestrator | 
2026-01-10 14:40:53.154549 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-10 14:40:53.154555 | orchestrator | Saturday 10 January 2026 14:39:28 +0000 (0:00:00.280) 0:00:21.785 ******
2026-01-10 14:40:53.154561 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:40:53.154568 | orchestrator | 
2026-01-10 14:40:53.154573 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-01-10 14:40:53.154579 | orchestrator | Saturday 10 January 2026 14:39:29 +0000 (0:00:00.748) 0:00:22.533 ******
2026-01-10 14:40:53.154594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-10 14:40:53.154611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-10 14:40:53.154628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-10 14:40:53.154640 | orchestrator | 
2026-01-10 14:40:53.154648 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-01-10 14:40:53.154654 | orchestrator | Saturday 10 January 2026 14:39:30 +0000 (0:00:01.521) 0:00:24.055 ******
2026-01-10 14:40:53.154667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-10 14:40:53.154675 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.154688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-10 14:40:53.154702 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.154709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-10 14:40:53.154716 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:40:53.154723 | orchestrator | 
2026-01-10 14:40:53.154730 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-01-10 14:40:53.154737 | orchestrator | Saturday 10 January 2026 14:39:31 +0000 (0:00:00.790) 0:00:24.846 ******
2026-01-10 14:40:53.154751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-10 14:40:53.154764 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:40:53.154772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-10 14:40:53.154780 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:40:53.154799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:40:53.154827 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:53.154835 | orchestrator | 2026-01-10 14:40:53.154842 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-10 14:40:53.154849 | orchestrator | Saturday 10 January 2026 14:39:32 +0000 (0:00:00.848) 0:00:25.694 ****** 2026-01-10 14:40:53.154859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:40:53.154876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:40:53.154890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:40:53.154902 | orchestrator | 2026-01-10 14:40:53.154909 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10 14:40:53.154916 | orchestrator | Saturday 10 January 2026 14:39:33 +0000 (0:00:01.434) 0:00:27.129 ****** 2026-01-10 14:40:53.154923 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:40:53.154929 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:40:53.154935 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:40:53.154941 | orchestrator | 2026-01-10 14:40:53.154948 | orchestrator | TASK [horizon : include_tasks] ************************************************* 
2026-01-10 14:40:53.154954 | orchestrator | Saturday 10 January 2026 14:39:34 +0000 (0:00:00.292) 0:00:27.421 ****** 2026-01-10 14:40:53.154961 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:40:53.154969 | orchestrator | 2026-01-10 14:40:53.154976 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-10 14:40:53.154986 | orchestrator | Saturday 10 January 2026 14:39:34 +0000 (0:00:00.533) 0:00:27.955 ****** 2026-01-10 14:40:53.154993 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:53.155000 | orchestrator | 2026-01-10 14:40:53.155007 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-10 14:40:53.155014 | orchestrator | Saturday 10 January 2026 14:39:37 +0000 (0:00:02.790) 0:00:30.745 ****** 2026-01-10 14:40:53.155021 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:53.155028 | orchestrator | 2026-01-10 14:40:53.155036 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-10 14:40:53.155043 | orchestrator | Saturday 10 January 2026 14:39:40 +0000 (0:00:02.929) 0:00:33.675 ****** 2026-01-10 14:40:53.155050 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:53.155057 | orchestrator | 2026-01-10 14:40:53.155064 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-10 14:40:53.155071 | orchestrator | Saturday 10 January 2026 14:39:58 +0000 (0:00:18.286) 0:00:51.961 ****** 2026-01-10 14:40:53.155078 | orchestrator | 2026-01-10 14:40:53.155085 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-10 14:40:53.155092 | orchestrator | Saturday 10 January 2026 14:39:58 +0000 (0:00:00.073) 0:00:52.035 ****** 2026-01-10 14:40:53.155099 | orchestrator | 2026-01-10 14:40:53.155106 | orchestrator | TASK 
[horizon : Flush handlers] ************************************************ 2026-01-10 14:40:53.155113 | orchestrator | Saturday 10 January 2026 14:39:58 +0000 (0:00:00.065) 0:00:52.100 ****** 2026-01-10 14:40:53.155120 | orchestrator | 2026-01-10 14:40:53.155128 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-10 14:40:53.155134 | orchestrator | Saturday 10 January 2026 14:39:58 +0000 (0:00:00.066) 0:00:52.167 ****** 2026-01-10 14:40:53.155142 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:40:53.155149 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:40:53.155156 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:40:53.155163 | orchestrator | 2026-01-10 14:40:53.155170 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:40:53.155177 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-10 14:40:53.155185 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-10 14:40:53.155192 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-10 14:40:53.155204 | orchestrator | 2026-01-10 14:40:53.155211 | orchestrator | 2026-01-10 14:40:53.155218 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:40:53.155225 | orchestrator | Saturday 10 January 2026 14:40:52 +0000 (0:00:53.293) 0:01:45.461 ****** 2026-01-10 14:40:53.155232 | orchestrator | =============================================================================== 2026-01-10 14:40:53.155239 | orchestrator | horizon : Restart horizon container ------------------------------------ 53.29s 2026-01-10 14:40:53.155247 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 18.29s 2026-01-10 14:40:53.155254 | 
orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.93s 2026-01-10 14:40:53.155261 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.79s 2026-01-10 14:40:53.155268 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.10s 2026-01-10 14:40:53.155275 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.01s 2026-01-10 14:40:53.155282 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.87s 2026-01-10 14:40:53.155289 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.70s 2026-01-10 14:40:53.155296 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.52s 2026-01-10 14:40:53.155303 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.43s 2026-01-10 14:40:53.155309 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.33s 2026-01-10 14:40:53.155318 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.85s 2026-01-10 14:40:53.155323 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.79s 2026-01-10 14:40:53.155329 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2026-01-10 14:40:53.155334 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-01-10 14:40:53.155339 | orchestrator | horizon : Update policy file name --------------------------------------- 0.71s 2026-01-10 14:40:53.155346 | orchestrator | horizon : Update policy file name --------------------------------------- 0.69s 2026-01-10 14:40:53.155353 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s 2026-01-10 14:40:53.155360 | 
orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2026-01-10 14:40:53.155366 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2026-01-10 14:40:53.155374 | orchestrator | 2026-01-10 14:40:53 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:40:53.155381 | orchestrator | 2026-01-10 14:40:53 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:40:56.194682 | orchestrator | 2026-01-10 14:40:56 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED 2026-01-10 14:40:56.196437 | orchestrator | 2026-01-10 14:40:56 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:40:56.196523 | orchestrator | 2026-01-10 14:40:56 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:40:59.241640 | orchestrator | 2026-01-10 14:40:59 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED 2026-01-10 14:40:59.243951 | orchestrator | 2026-01-10 14:40:59 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:40:59.244116 | orchestrator | 2026-01-10 14:40:59 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:41:02.300499 | orchestrator | 2026-01-10 14:41:02 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED 2026-01-10 14:41:02.304086 | orchestrator | 2026-01-10 14:41:02 | INFO  | Task 87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state STARTED 2026-01-10 14:41:02.304195 | orchestrator | 2026-01-10 14:41:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:41:05.347937 | orchestrator | 2026-01-10 14:41:05 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED 2026-01-10 14:41:05.348577 | orchestrator | 2026-01-10 14:41:05 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED 2026-01-10 14:41:05.350673 | orchestrator | 2026-01-10 14:41:05 | INFO  | Task 
87af6f8e-b292-469c-9e4a-e3a924a29f9a is in state SUCCESS 2026-01-10 14:41:05.351948 | orchestrator | 2026-01-10 14:41:05.351978 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-10 14:41:05.351985 | orchestrator | 2.16.14 2026-01-10 14:41:05.351992 | orchestrator | 2026-01-10 14:41:05.352016 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-10 14:41:05.352024 | orchestrator | 2026-01-10 14:41:05.352030 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-10 14:41:05.352048 | orchestrator | Saturday 10 January 2026 14:38:49 +0000 (0:00:00.607) 0:00:00.607 ****** 2026-01-10 14:41:05.352054 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:05.352061 | orchestrator | 2026-01-10 14:41:05.352067 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-10 14:41:05.352073 | orchestrator | Saturday 10 January 2026 14:38:49 +0000 (0:00:00.657) 0:00:01.265 ****** 2026-01-10 14:41:05.352090 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.352097 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.352103 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.352109 | orchestrator | 2026-01-10 14:41:05.352114 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-10 14:41:05.352120 | orchestrator | Saturday 10 January 2026 14:38:50 +0000 (0:00:00.616) 0:00:01.881 ****** 2026-01-10 14:41:05.352126 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.352132 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.352137 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.352143 | orchestrator | 2026-01-10 14:41:05.352149 | orchestrator | TASK [ceph-facts : Check if podman binary is present] 
************************** 2026-01-10 14:41:05.352155 | orchestrator | Saturday 10 January 2026 14:38:50 +0000 (0:00:00.346) 0:00:02.227 ****** 2026-01-10 14:41:05.352160 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.352166 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.352172 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.352177 | orchestrator | 2026-01-10 14:41:05.352183 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-10 14:41:05.352189 | orchestrator | Saturday 10 January 2026 14:38:51 +0000 (0:00:00.861) 0:00:03.089 ****** 2026-01-10 14:41:05.352195 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.352201 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.352206 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.352212 | orchestrator | 2026-01-10 14:41:05.352217 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-10 14:41:05.352222 | orchestrator | Saturday 10 January 2026 14:38:51 +0000 (0:00:00.330) 0:00:03.419 ****** 2026-01-10 14:41:05.352228 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.352234 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.352240 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.352245 | orchestrator | 2026-01-10 14:41:05.352259 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-10 14:41:05.352266 | orchestrator | Saturday 10 January 2026 14:38:52 +0000 (0:00:00.320) 0:00:03.739 ****** 2026-01-10 14:41:05.352272 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.352277 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.352297 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.352303 | orchestrator | 2026-01-10 14:41:05.352308 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-10 14:41:05.352314 | orchestrator | 
Saturday 10 January 2026 14:38:52 +0000 (0:00:00.324) 0:00:04.064 ****** 2026-01-10 14:41:05.352320 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.352340 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.352346 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.352352 | orchestrator | 2026-01-10 14:41:05.352403 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-10 14:41:05.352426 | orchestrator | Saturday 10 January 2026 14:38:53 +0000 (0:00:00.534) 0:00:04.599 ****** 2026-01-10 14:41:05.352433 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.352439 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.352444 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.352450 | orchestrator | 2026-01-10 14:41:05.352456 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-10 14:41:05.352462 | orchestrator | Saturday 10 January 2026 14:38:53 +0000 (0:00:00.282) 0:00:04.881 ****** 2026-01-10 14:41:05.352468 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:41:05.352474 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:41:05.352480 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:41:05.352485 | orchestrator | 2026-01-10 14:41:05.352491 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-10 14:41:05.352550 | orchestrator | Saturday 10 January 2026 14:38:54 +0000 (0:00:00.650) 0:00:05.532 ****** 2026-01-10 14:41:05.352557 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.352562 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.352568 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.352572 | orchestrator | 2026-01-10 14:41:05.352577 | orchestrator | TASK 
[ceph-facts : Find a running mon container] ******************************* 2026-01-10 14:41:05.352583 | orchestrator | Saturday 10 January 2026 14:38:54 +0000 (0:00:00.442) 0:00:05.974 ****** 2026-01-10 14:41:05.352588 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:41:05.352593 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:41:05.352598 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:41:05.352603 | orchestrator | 2026-01-10 14:41:05.352609 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-10 14:41:05.352615 | orchestrator | Saturday 10 January 2026 14:38:56 +0000 (0:00:02.342) 0:00:08.317 ****** 2026-01-10 14:41:05.352622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-10 14:41:05.352628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-10 14:41:05.352634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-10 14:41:05.352639 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.352644 | orchestrator | 2026-01-10 14:41:05.352659 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-10 14:41:05.352666 | orchestrator | Saturday 10 January 2026 14:38:57 +0000 (0:00:00.662) 0:00:08.979 ****** 2026-01-10 14:41:05.352674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.352683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 
'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.352689 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.352696 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.352703 | orchestrator | 2026-01-10 14:41:05.352708 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-10 14:41:05.352722 | orchestrator | Saturday 10 January 2026 14:38:58 +0000 (0:00:00.823) 0:00:09.803 ****** 2026-01-10 14:41:05.352729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.352742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.352748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.352755 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.352762 | orchestrator | 2026-01-10 14:41:05.352768 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-10 14:41:05.352775 | orchestrator | Saturday 10 January 2026 14:38:58 +0000 (0:00:00.356) 0:00:10.160 ****** 2026-01-10 14:41:05.352784 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f716979d2299', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-10 14:38:55.331545', 'end': '2026-01-10 14:38:55.377782', 'delta': '0:00:00.046237', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f716979d2299'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-10 14:41:05.352794 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '55e9cfd78d87', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-10 14:38:56.154271', 'end': '2026-01-10 14:38:56.189818', 'delta': '0:00:00.035547', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['55e9cfd78d87'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-01-10 14:41:05.352806 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9d52b736b203', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-10 14:38:56.692117', 'end': '2026-01-10 14:38:56.733960', 'delta': '0:00:00.041843', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9d52b736b203'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-10 14:41:05.352818 | orchestrator | 2026-01-10 14:41:05.352824 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-10 14:41:05.352831 | orchestrator | Saturday 10 January 2026 14:38:58 +0000 (0:00:00.192) 0:00:10.352 ****** 2026-01-10 14:41:05.352837 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.352844 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.352850 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.352857 | orchestrator | 2026-01-10 14:41:05.352864 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-10 14:41:05.352870 | orchestrator | Saturday 10 January 2026 14:38:59 +0000 (0:00:00.648) 0:00:11.000 ****** 2026-01-10 14:41:05.352898 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-10 14:41:05.352905 | orchestrator | 2026-01-10 14:41:05.352912 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-10 14:41:05.352919 | orchestrator | Saturday 10 January 2026 14:39:01 +0000 (0:00:01.798) 0:00:12.799 ****** 2026-01-10 14:41:05.352926 | orchestrator | skipping: 
[testbed-node-3] 2026-01-10 14:41:05.352933 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.352940 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.352946 | orchestrator | 2026-01-10 14:41:05.352952 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-10 14:41:05.352959 | orchestrator | Saturday 10 January 2026 14:39:01 +0000 (0:00:00.319) 0:00:13.119 ****** 2026-01-10 14:41:05.352965 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.352971 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.352977 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.352983 | orchestrator | 2026-01-10 14:41:05.352992 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-10 14:41:05.352998 | orchestrator | Saturday 10 January 2026 14:39:02 +0000 (0:00:00.439) 0:00:13.559 ****** 2026-01-10 14:41:05.353004 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.353010 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.353080 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.353086 | orchestrator | 2026-01-10 14:41:05.353092 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-10 14:41:05.353098 | orchestrator | Saturday 10 January 2026 14:39:02 +0000 (0:00:00.535) 0:00:14.094 ****** 2026-01-10 14:41:05.353103 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.353109 | orchestrator | 2026-01-10 14:41:05.353115 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-10 14:41:05.353121 | orchestrator | Saturday 10 January 2026 14:39:02 +0000 (0:00:00.141) 0:00:14.235 ****** 2026-01-10 14:41:05.353126 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.353132 | orchestrator | 2026-01-10 14:41:05.353138 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-01-10 14:41:05.353144 | orchestrator | Saturday 10 January 2026 14:39:03 +0000 (0:00:00.251) 0:00:14.486 ****** 2026-01-10 14:41:05.353150 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.353155 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.353161 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.353166 | orchestrator | 2026-01-10 14:41:05.353172 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-10 14:41:05.353178 | orchestrator | Saturday 10 January 2026 14:39:03 +0000 (0:00:00.311) 0:00:14.798 ****** 2026-01-10 14:41:05.353183 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.353189 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.353195 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.353201 | orchestrator | 2026-01-10 14:41:05.353207 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-10 14:41:05.353212 | orchestrator | Saturday 10 January 2026 14:39:03 +0000 (0:00:00.324) 0:00:15.122 ****** 2026-01-10 14:41:05.353218 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.353224 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.353234 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.353240 | orchestrator | 2026-01-10 14:41:05.353246 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-10 14:41:05.353251 | orchestrator | Saturday 10 January 2026 14:39:04 +0000 (0:00:00.542) 0:00:15.665 ****** 2026-01-10 14:41:05.353257 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.353263 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.353269 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.353275 | orchestrator | 2026-01-10 14:41:05.353281 | orchestrator | TASK [ceph-facts : Set_fact build 
dedicated_devices from resolved symlinks] **** 2026-01-10 14:41:05.353287 | orchestrator | Saturday 10 January 2026 14:39:04 +0000 (0:00:00.332) 0:00:15.998 ****** 2026-01-10 14:41:05.353292 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.353299 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.353304 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.353310 | orchestrator | 2026-01-10 14:41:05.353316 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-10 14:41:05.353322 | orchestrator | Saturday 10 January 2026 14:39:04 +0000 (0:00:00.324) 0:00:16.323 ****** 2026-01-10 14:41:05.353328 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.353334 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.353340 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.353350 | orchestrator | 2026-01-10 14:41:05.353356 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-10 14:41:05.353362 | orchestrator | Saturday 10 January 2026 14:39:05 +0000 (0:00:00.418) 0:00:16.742 ****** 2026-01-10 14:41:05.353368 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.353373 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.353379 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.353385 | orchestrator | 2026-01-10 14:41:05.353391 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-10 14:41:05.353396 | orchestrator | Saturday 10 January 2026 14:39:05 +0000 (0:00:00.558) 0:00:17.301 ****** 2026-01-10 14:41:05.353403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e-osd--block--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e', 
'dm-uuid-LVM-XVOmmU8gw9B369gyxlceU1KPl5227E4OnHZY8euzWOfpRxy0f0KzZTzTJfXguFbf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aeb55798--e032--5872--951c--62472db4891e-osd--block--aeb55798--e032--5872--951c--62472db4891e', 'dm-uuid-LVM-pqJmM8ieqWZ6BdY530dv83iHOMYrza8a16k50Rvgm1IhOwTHfYLJwUFE1CPFcmjp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353437 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part1', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part14', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part15', 
'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part16', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.353498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e-osd--block--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-A2f7Z4-KCNH-W5Ce-ou5s-feTB-RgoC-qTsSaF', 'scsi-0QEMU_QEMU_HARDDISK_fb1cd23c-1eba-48f8-b0af-e37f12bddfbe', 'scsi-SQEMU_QEMU_HARDDISK_fb1cd23c-1eba-48f8-b0af-e37f12bddfbe'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.353508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--381f50a6--56c2--5a32--835b--1a08246466ad-osd--block--381f50a6--56c2--5a32--835b--1a08246466ad', 'dm-uuid-LVM-qB82mU0uRSY6RHhcksnqy9N8MyTE4sXUt2kgIbhVbEctaWIrtigAJOrMzz6Tn28Q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--aeb55798--e032--5872--951c--62472db4891e-osd--block--aeb55798--e032--5872--951c--62472db4891e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DwDCtJ-LyMf-XzrY-Eff3-Djlk-vdWz-pf7GZs', 'scsi-0QEMU_QEMU_HARDDISK_2ce7cca4-0817-4dba-a1e7-697e67028341', 'scsi-SQEMU_QEMU_HARDDISK_2ce7cca4-0817-4dba-a1e7-697e67028341'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.353539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_644eb2b6-5717-40d5-adcd-cd376a39a92a', 'scsi-SQEMU_QEMU_HARDDISK_644eb2b6-5717-40d5-adcd-cd376a39a92a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.353547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a6c1f07--f96f--5f9c--9404--64a84774a9be-osd--block--5a6c1f07--f96f--5f9c--9404--64a84774a9be', 'dm-uuid-LVM-rs3eayTU9p4tHP9XXl5XuCOOAEpJ4KkoVh8Fw56E0fiOqhRkxS8Qh0ZKStJpEIbA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.353564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353597 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.353604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part1', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part14', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part15', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part16', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.353638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--381f50a6--56c2--5a32--835b--1a08246466ad-osd--block--381f50a6--56c2--5a32--835b--1a08246466ad'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xIeQyy-K7KP-9fGF-640Y-OGvx-NBxv-nPopt0', 'scsi-0QEMU_QEMU_HARDDISK_4c46785e-60ba-460b-8af0-69ed9944293e', 'scsi-SQEMU_QEMU_HARDDISK_4c46785e-60ba-460b-8af0-69ed9944293e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.353646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5a6c1f07--f96f--5f9c--9404--64a84774a9be-osd--block--5a6c1f07--f96f--5f9c--9404--64a84774a9be'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fqjcD3-KDxr-vbAM-6N0Q-cc7U-P1SB-bgaSVv', 'scsi-0QEMU_QEMU_HARDDISK_f60c9e3f-4fb9-4762-8319-6decaa6c25a2', 'scsi-SQEMU_QEMU_HARDDISK_f60c9e3f-4fb9-4762-8319-6decaa6c25a2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.353654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_56640cac-7dbd-450f-ace0-5456f0f7a79c', 'scsi-SQEMU_QEMU_HARDDISK_56640cac-7dbd-450f-ace0-5456f0f7a79c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.353660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f-osd--block--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f', 'dm-uuid-LVM-7cucNAsaiAAotIpLmbIAdQU43KNMMITqiNtsEoSbWPIVcHr7jJK8P2eJW6H4b5ym'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.353672 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.353960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8e61bc65--6745--5d05--9905--13a4cfa0641e-osd--block--8e61bc65--6745--5d05--9905--13a4cfa0641e', 'dm-uuid-LVM-XVGDo2c7ar3U5yej56EfTBud9IPUfNDALDqim7D21QW70LyT1U2UjGoboBptL3og'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.353987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.354002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.354008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.354043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.354049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.354055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:05.354067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part16', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.354081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f-osd--block--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yk7Nti-nWl1-IsZK-CxIA-L5NY-lYh9-PSyeZY', 'scsi-0QEMU_QEMU_HARDDISK_6601bfae-4805-46bf-9ab8-35c841e000dc', 'scsi-SQEMU_QEMU_HARDDISK_6601bfae-4805-46bf-9ab8-35c841e000dc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.354087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8e61bc65--6745--5d05--9905--13a4cfa0641e-osd--block--8e61bc65--6745--5d05--9905--13a4cfa0641e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Orc67S-vipX-AVMa-hkR8-UvV0-2ko5-K0ZhW3', 'scsi-0QEMU_QEMU_HARDDISK_80389416-edd4-4aaf-b80d-5b05821e7076', 'scsi-SQEMU_QEMU_HARDDISK_80389416-edd4-4aaf-b80d-5b05821e7076'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.354093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e023e992-ae40-4cae-8e0e-c078bcc164d6', 'scsi-SQEMU_QEMU_HARDDISK_e023e992-ae40-4cae-8e0e-c078bcc164d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.354103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:05.354109 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.354115 | orchestrator | 2026-01-10 14:41:05.354120 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-10 14:41:05.354126 | orchestrator | Saturday 10 January 2026 14:39:06 +0000 (0:00:00.776) 0:00:18.077 ****** 2026-01-10 14:41:05.354132 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e-osd--block--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e', 'dm-uuid-LVM-XVOmmU8gw9B369gyxlceU1KPl5227E4OnHZY8euzWOfpRxy0f0KzZTzTJfXguFbf'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aeb55798--e032--5872--951c--62472db4891e-osd--block--aeb55798--e032--5872--951c--62472db4891e', 'dm-uuid-LVM-pqJmM8ieqWZ6BdY530dv83iHOMYrza8a16k50Rvgm1IhOwTHfYLJwUFE1CPFcmjp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354151 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354179 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354188 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--381f50a6--56c2--5a32--835b--1a08246466ad-osd--block--381f50a6--56c2--5a32--835b--1a08246466ad', 'dm-uuid-LVM-qB82mU0uRSY6RHhcksnqy9N8MyTE4sXUt2kgIbhVbEctaWIrtigAJOrMzz6Tn28Q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354198 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354205 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a6c1f07--f96f--5f9c--9404--64a84774a9be-osd--block--5a6c1f07--f96f--5f9c--9404--64a84774a9be', 'dm-uuid-LVM-rs3eayTU9p4tHP9XXl5XuCOOAEpJ4KkoVh8Fw56E0fiOqhRkxS8Qh0ZKStJpEIbA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354211 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354219 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354227 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354236 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354246 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part1', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part14', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part15', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part16', 'scsi-SQEMU_QEMU_HARDDISK_9218c5d8-5f0e-4ef3-b14f-4b2502394196-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-10 14:41:05.354255 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354274 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e-osd--block--2f4cdd2b--88b0--5432--8a57--fbfff03caf8e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-A2f7Z4-KCNH-W5Ce-ou5s-feTB-RgoC-qTsSaF', 'scsi-0QEMU_QEMU_HARDDISK_fb1cd23c-1eba-48f8-b0af-e37f12bddfbe', 'scsi-SQEMU_QEMU_HARDDISK_fb1cd23c-1eba-48f8-b0af-e37f12bddfbe'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354293 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354299 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--aeb55798--e032--5872--951c--62472db4891e-osd--block--aeb55798--e032--5872--951c--62472db4891e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DwDCtJ-LyMf-XzrY-Eff3-Djlk-vdWz-pf7GZs', 'scsi-0QEMU_QEMU_HARDDISK_2ce7cca4-0817-4dba-a1e7-697e67028341', 'scsi-SQEMU_QEMU_HARDDISK_2ce7cca4-0817-4dba-a1e7-697e67028341'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354305 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_644eb2b6-5717-40d5-adcd-cd376a39a92a', 'scsi-SQEMU_QEMU_HARDDISK_644eb2b6-5717-40d5-adcd-cd376a39a92a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354323 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354338 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354348 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part1', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part14', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part15', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part16', 'scsi-SQEMU_QEMU_HARDDISK_8c985bfc-a5bb-40d1-ad90-a588790d178e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354359 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.354367 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--381f50a6--56c2--5a32--835b--1a08246466ad-osd--block--381f50a6--56c2--5a32--835b--1a08246466ad'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xIeQyy-K7KP-9fGF-640Y-OGvx-NBxv-nPopt0', 'scsi-0QEMU_QEMU_HARDDISK_4c46785e-60ba-460b-8af0-69ed9944293e', 'scsi-SQEMU_QEMU_HARDDISK_4c46785e-60ba-460b-8af0-69ed9944293e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354374 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5a6c1f07--f96f--5f9c--9404--64a84774a9be-osd--block--5a6c1f07--f96f--5f9c--9404--64a84774a9be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fqjcD3-KDxr-vbAM-6N0Q-cc7U-P1SB-bgaSVv', 'scsi-0QEMU_QEMU_HARDDISK_f60c9e3f-4fb9-4762-8319-6decaa6c25a2', 'scsi-SQEMU_QEMU_HARDDISK_f60c9e3f-4fb9-4762-8319-6decaa6c25a2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354380 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_56640cac-7dbd-450f-ace0-5456f0f7a79c', 'scsi-SQEMU_QEMU_HARDDISK_56640cac-7dbd-450f-ace0-5456f0f7a79c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354389 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354398 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.354404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f-osd--block--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f', 'dm-uuid-LVM-7cucNAsaiAAotIpLmbIAdQU43KNMMITqiNtsEoSbWPIVcHr7jJK8P2eJW6H4b5ym'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354413 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8e61bc65--6745--5d05--9905--13a4cfa0641e-osd--block--8e61bc65--6745--5d05--9905--13a4cfa0641e', 'dm-uuid-LVM-XVGDo2c7ar3U5yej56EfTBud9IPUfNDALDqim7D21QW70LyT1U2UjGoboBptL3og'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354420 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354426 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354432 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354441 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354452 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354458 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354467 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354483 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part16', 
'scsi-SQEMU_QEMU_HARDDISK_8fa62895-cbfb-4207-9a20-878bfa0ed6d1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354496 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f-osd--block--f26dfcab--b4e5--55cc--b0d4--5a4bbd1b375f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yk7Nti-nWl1-IsZK-CxIA-L5NY-lYh9-PSyeZY', 'scsi-0QEMU_QEMU_HARDDISK_6601bfae-4805-46bf-9ab8-35c841e000dc', 'scsi-SQEMU_QEMU_HARDDISK_6601bfae-4805-46bf-9ab8-35c841e000dc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354502 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8e61bc65--6745--5d05--9905--13a4cfa0641e-osd--block--8e61bc65--6745--5d05--9905--13a4cfa0641e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Orc67S-vipX-AVMa-hkR8-UvV0-2ko5-K0ZhW3', 'scsi-0QEMU_QEMU_HARDDISK_80389416-edd4-4aaf-b80d-5b05821e7076', 'scsi-SQEMU_QEMU_HARDDISK_80389416-edd4-4aaf-b80d-5b05821e7076'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354508 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e023e992-ae40-4cae-8e0e-c078bcc164d6', 'scsi-SQEMU_QEMU_HARDDISK_e023e992-ae40-4cae-8e0e-c078bcc164d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354537 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:05.354544 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.354550 | orchestrator | 2026-01-10 14:41:05.354555 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-10 14:41:05.354562 | orchestrator | Saturday 10 January 2026 14:39:07 +0000 (0:00:00.788) 0:00:18.866 ****** 2026-01-10 14:41:05.354567 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.354573 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.354580 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.354586 | orchestrator | 2026-01-10 14:41:05.354593 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-10 14:41:05.354599 | orchestrator | Saturday 10 January 2026 14:39:08 +0000 (0:00:00.735) 0:00:19.602 ****** 2026-01-10 14:41:05.354606 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.354612 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.354618 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.354624 | orchestrator | 2026-01-10 14:41:05.354630 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-10 14:41:05.354636 | orchestrator | Saturday 10 January 2026 14:39:08 +0000 (0:00:00.499) 0:00:20.101 ****** 2026-01-10 14:41:05.354640 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.354645 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.354650 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.354655 | orchestrator | 2026-01-10 14:41:05.354661 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-10 14:41:05.354668 | orchestrator | Saturday 10 January 2026 14:39:09 +0000 (0:00:00.731) 0:00:20.832 
****** 2026-01-10 14:41:05.354674 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.354681 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.354688 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.354694 | orchestrator | 2026-01-10 14:41:05.354701 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-10 14:41:05.354708 | orchestrator | Saturday 10 January 2026 14:39:09 +0000 (0:00:00.316) 0:00:21.149 ****** 2026-01-10 14:41:05.354715 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.354721 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.354726 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.354732 | orchestrator | 2026-01-10 14:41:05.354741 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-10 14:41:05.354747 | orchestrator | Saturday 10 January 2026 14:39:10 +0000 (0:00:00.405) 0:00:21.555 ****** 2026-01-10 14:41:05.354753 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.354759 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.354765 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.354771 | orchestrator | 2026-01-10 14:41:05.354777 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-10 14:41:05.354782 | orchestrator | Saturday 10 January 2026 14:39:10 +0000 (0:00:00.544) 0:00:22.099 ****** 2026-01-10 14:41:05.354788 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-10 14:41:05.354794 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-10 14:41:05.354800 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-10 14:41:05.354806 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-10 14:41:05.354815 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-10 14:41:05.354821 | orchestrator | 
ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-10 14:41:05.354826 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-10 14:41:05.354832 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-10 14:41:05.354838 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-10 14:41:05.354843 | orchestrator | 2026-01-10 14:41:05.354849 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-10 14:41:05.354854 | orchestrator | Saturday 10 January 2026 14:39:11 +0000 (0:00:01.115) 0:00:23.215 ****** 2026-01-10 14:41:05.354860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-10 14:41:05.354866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-10 14:41:05.354872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-10 14:41:05.354877 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.354883 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-10 14:41:05.354889 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-10 14:41:05.354895 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-10 14:41:05.354901 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.354907 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-10 14:41:05.354913 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-10 14:41:05.354919 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-10 14:41:05.354925 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.354930 | orchestrator | 2026-01-10 14:41:05.354936 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-10 14:41:05.354942 | orchestrator | Saturday 10 January 2026 14:39:12 +0000 (0:00:00.360) 0:00:23.575 ****** 2026-01-10 
14:41:05.354948 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:05.354954 | orchestrator | 2026-01-10 14:41:05.354960 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-10 14:41:05.354967 | orchestrator | Saturday 10 January 2026 14:39:12 +0000 (0:00:00.695) 0:00:24.271 ****** 2026-01-10 14:41:05.354976 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.354983 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.354988 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.354994 | orchestrator | 2026-01-10 14:41:05.355000 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-10 14:41:05.355006 | orchestrator | Saturday 10 January 2026 14:39:13 +0000 (0:00:00.333) 0:00:24.605 ****** 2026-01-10 14:41:05.355012 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.355017 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.355023 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.355029 | orchestrator | 2026-01-10 14:41:05.355035 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-10 14:41:05.355040 | orchestrator | Saturday 10 January 2026 14:39:13 +0000 (0:00:00.307) 0:00:24.912 ****** 2026-01-10 14:41:05.355046 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.355051 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.355057 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:05.355062 | orchestrator | 2026-01-10 14:41:05.355067 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-10 14:41:05.355073 | orchestrator | Saturday 10 January 2026 14:39:13 +0000 (0:00:00.320) 0:00:25.233 ****** 2026-01-10 
14:41:05.355079 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.355085 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.355091 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.355096 | orchestrator | 2026-01-10 14:41:05.355102 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-10 14:41:05.355116 | orchestrator | Saturday 10 January 2026 14:39:14 +0000 (0:00:00.895) 0:00:26.129 ****** 2026-01-10 14:41:05.355122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:05.355128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:05.355133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:05.355139 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.355145 | orchestrator | 2026-01-10 14:41:05.355151 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-10 14:41:05.355156 | orchestrator | Saturday 10 January 2026 14:39:15 +0000 (0:00:00.379) 0:00:26.508 ****** 2026-01-10 14:41:05.355162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:05.355168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:05.355173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:05.355179 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.355184 | orchestrator | 2026-01-10 14:41:05.355190 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-10 14:41:05.355199 | orchestrator | Saturday 10 January 2026 14:39:15 +0000 (0:00:00.373) 0:00:26.881 ****** 2026-01-10 14:41:05.355205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:05.355211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:05.355217 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:05.355222 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.355228 | orchestrator | 2026-01-10 14:41:05.355234 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-10 14:41:05.355240 | orchestrator | Saturday 10 January 2026 14:39:15 +0000 (0:00:00.379) 0:00:27.260 ****** 2026-01-10 14:41:05.355246 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:05.355252 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:05.355258 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:05.355264 | orchestrator | 2026-01-10 14:41:05.355270 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-10 14:41:05.355276 | orchestrator | Saturday 10 January 2026 14:39:16 +0000 (0:00:00.337) 0:00:27.598 ****** 2026-01-10 14:41:05.355281 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-10 14:41:05.355287 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-10 14:41:05.355293 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-10 14:41:05.355299 | orchestrator | 2026-01-10 14:41:05.355305 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-10 14:41:05.355311 | orchestrator | Saturday 10 January 2026 14:39:16 +0000 (0:00:00.562) 0:00:28.160 ****** 2026-01-10 14:41:05.355317 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:41:05.355322 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:41:05.355328 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:41:05.355333 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-10 14:41:05.355339 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-10 14:41:05.355345 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-10 14:41:05.355350 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-10 14:41:05.355356 | orchestrator | 2026-01-10 14:41:05.355362 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-10 14:41:05.355367 | orchestrator | Saturday 10 January 2026 14:39:17 +0000 (0:00:01.048) 0:00:29.208 ****** 2026-01-10 14:41:05.355373 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:41:05.355378 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:41:05.355388 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:41:05.355394 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-10 14:41:05.355400 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-10 14:41:05.355405 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-10 14:41:05.355414 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-10 14:41:05.355421 | orchestrator | 2026-01-10 14:41:05.355427 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-10 14:41:05.355432 | orchestrator | Saturday 10 January 2026 14:39:19 +0000 (0:00:01.978) 0:00:31.187 ****** 2026-01-10 14:41:05.355438 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:05.355444 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:05.355450 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-10 14:41:05.355456 | orchestrator | 2026-01-10 14:41:05.355461 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-10 14:41:05.355466 | orchestrator | Saturday 10 January 2026 14:39:20 +0000 (0:00:00.408) 0:00:31.596 ****** 2026-01-10 14:41:05.355473 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:41:05.355480 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:41:05.355486 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:41:05.355492 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:41:05.355501 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:41:05.355507 | orchestrator | 2026-01-10 14:41:05.355513 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-10 14:41:05.355547 | orchestrator | Saturday 10 January 2026 14:40:08 +0000 (0:00:48.723) 0:01:20.320 ****** 2026-01-10 14:41:05.355553 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355559 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355565 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355571 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355577 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355582 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355588 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-10 14:41:05.355594 | orchestrator | 2026-01-10 14:41:05.355600 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-10 14:41:05.355610 | orchestrator | Saturday 10 January 2026 14:40:33 +0000 (0:00:24.148) 0:01:44.468 ****** 2026-01-10 14:41:05.355616 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355621 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355627 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355632 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355637 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355642 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355647 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:41:05.355652 | orchestrator | 2026-01-10 14:41:05.355657 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-10 14:41:05.355662 | orchestrator | Saturday 10 January 2026 14:40:45 +0000 (0:00:12.736) 0:01:57.205 ****** 2026-01-10 14:41:05.355668 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355674 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:41:05.355679 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:41:05.355685 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355691 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:41:05.355700 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:41:05.355707 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355713 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:41:05.355718 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:41:05.355724 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355730 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:41:05.355736 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:41:05.355741 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355747 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-10 14:41:05.355753 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:41:05.355759 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:05.355765 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:41:05.355770 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:41:05.355776 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-10 14:41:05.355782 | orchestrator | 2026-01-10 14:41:05.355787 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:41:05.355793 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-10 14:41:05.355799 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-10 14:41:05.355805 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-10 14:41:05.355811 | orchestrator | 2026-01-10 14:41:05.355816 | orchestrator | 2026-01-10 14:41:05.355822 | orchestrator | 2026-01-10 14:41:05.355832 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:41:05.355840 | orchestrator | Saturday 10 January 2026 14:41:03 +0000 (0:00:18.029) 0:02:15.235 ****** 2026-01-10 14:41:05.355846 | orchestrator | =============================================================================== 2026-01-10 14:41:05.355851 | orchestrator | create openstack pool(s) ----------------------------------------------- 48.72s 2026-01-10 14:41:05.355857 | orchestrator | generate keys ---------------------------------------------------------- 24.15s 2026-01-10 14:41:05.355863 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.03s 
2026-01-10 14:41:05.355868 | orchestrator | get keys from monitors ------------------------------------------------- 12.74s
2026-01-10 14:41:05.355874 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.34s
2026-01-10 14:41:05.355880 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.98s
2026-01-10 14:41:05.355885 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.80s
2026-01-10 14:41:05.355891 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.12s
2026-01-10 14:41:05.355897 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.05s
2026-01-10 14:41:05.355903 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.90s
2026-01-10 14:41:05.355908 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s
2026-01-10 14:41:05.355914 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.82s
2026-01-10 14:41:05.355920 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.79s
2026-01-10 14:41:05.355925 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.78s
2026-01-10 14:41:05.355931 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.74s
2026-01-10 14:41:05.355937 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.73s
2026-01-10 14:41:05.355942 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s
2026-01-10 14:41:05.355948 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.66s
2026-01-10 14:41:05.355954 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.66s
2026-01-10 14:41:05.355959 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s
2026-01-10 14:41:05.355965 | orchestrator | 2026-01-10 14:41:05 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:08.404977 | orchestrator | 2026-01-10 14:41:08 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:08.405804 | orchestrator | 2026-01-10 14:41:08 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:08.405844 | orchestrator | 2026-01-10 14:41:08 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:11.457655 | orchestrator | 2026-01-10 14:41:11 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:11.460105 | orchestrator | 2026-01-10 14:41:11 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:11.460159 | orchestrator | 2026-01-10 14:41:11 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:14.511376 | orchestrator | 2026-01-10 14:41:14 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:14.513108 | orchestrator | 2026-01-10 14:41:14 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:14.513157 | orchestrator | 2026-01-10 14:41:14 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:17.561745 | orchestrator | 2026-01-10 14:41:17 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:17.563457 | orchestrator | 2026-01-10 14:41:17 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:17.563581 | orchestrator | 2026-01-10 14:41:17 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:20.625032 | orchestrator | 2026-01-10 14:41:20 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:20.628586 | orchestrator | 2026-01-10 14:41:20 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:20.628673 | orchestrator | 2026-01-10 14:41:20 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:23.691529 | orchestrator | 2026-01-10 14:41:23 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:23.693115 | orchestrator | 2026-01-10 14:41:23 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:23.693140 | orchestrator | 2026-01-10 14:41:23 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:26.755852 | orchestrator | 2026-01-10 14:41:26 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:26.757121 | orchestrator | 2026-01-10 14:41:26 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:26.757171 | orchestrator | 2026-01-10 14:41:26 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:29.808713 | orchestrator | 2026-01-10 14:41:29 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:29.809068 | orchestrator | 2026-01-10 14:41:29 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:29.809091 | orchestrator | 2026-01-10 14:41:29 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:32.868481 | orchestrator | 2026-01-10 14:41:32 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:32.871312 | orchestrator | 2026-01-10 14:41:32 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:32.871392 | orchestrator | 2026-01-10 14:41:32 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:35.928049 | orchestrator | 2026-01-10 14:41:35 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:35.929474 | orchestrator | 2026-01-10 14:41:35 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:35.929608 | orchestrator | 2026-01-10 14:41:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:38.986771 | orchestrator | 2026-01-10 14:41:38 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:38.987907 | orchestrator | 2026-01-10 14:41:38 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:38.987945 | orchestrator | 2026-01-10 14:41:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:42.051459 | orchestrator | 2026-01-10 14:41:42 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:42.053991 | orchestrator | 2026-01-10 14:41:42 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state STARTED
2026-01-10 14:41:42.054610 | orchestrator | 2026-01-10 14:41:42 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:45.102209 | orchestrator | 2026-01-10 14:41:45 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:45.104062 | orchestrator | 2026-01-10 14:41:45 | INFO  | Task c8a4c3da-5e7f-4f4d-be69-2e252a3e70a8 is in state SUCCESS
2026-01-10 14:41:45.105721 | orchestrator | 2026-01-10 14:41:45 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED
2026-01-10 14:41:45.105946 | orchestrator | 2026-01-10 14:41:45 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:48.153122 | orchestrator | 2026-01-10 14:41:48 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state STARTED
2026-01-10 14:41:48.157383 | orchestrator | 2026-01-10 14:41:48 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED
2026-01-10 14:41:48.157439 | orchestrator | 2026-01-10 14:41:48 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:51.236683 | orchestrator |
2026-01-10 14:41:51.236775 | orchestrator |
2026-01-10 14:41:51.236788 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-10
14:41:51.236796 | orchestrator |
2026-01-10 14:41:51.236803 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-01-10 14:41:51.236809 | orchestrator | Saturday 10 January 2026 14:41:08 +0000 (0:00:00.166) 0:00:00.167 ******
2026-01-10 14:41:51.236815 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-10 14:41:51.236823 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.236829 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.236835 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-10 14:41:51.236841 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.236848 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-10 14:41:51.236854 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-10 14:41:51.236859 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-10 14:41:51.236993 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-10 14:41:51.236999 | orchestrator |
2026-01-10 14:41:51.237003 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-01-10 14:41:51.237007 | orchestrator | Saturday 10 January 2026 14:41:13 +0000 (0:00:04.390) 0:00:04.557 ******
2026-01-10 14:41:51.237011 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-10 14:41:51.237015 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.237033 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.237037 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-10 14:41:51.237041 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.237044 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-10 14:41:51.237048 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-10 14:41:51.237052 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-10 14:41:51.237058 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-10 14:41:51.237064 | orchestrator |
2026-01-10 14:41:51.237069 | orchestrator | TASK [Create share directory] **************************************************
2026-01-10 14:41:51.237077 | orchestrator | Saturday 10 January 2026 14:41:17 +0000 (0:00:01.047) 0:00:09.182 ******
2026-01-10 14:41:51.237087 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-10 14:41:51.237093 | orchestrator |
2026-01-10 14:41:51.237098 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-01-10 14:41:51.237105 | orchestrator | Saturday 10 January 2026 14:41:18 +0000 (0:00:01.047) 0:00:10.230 ******
2026-01-10 14:41:51.237135 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-01-10 14:41:51.237143 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.237149 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.237154 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-01-10 14:41:51.237158 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.237163 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-01-10 14:41:51.237167 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-01-10 14:41:51.237172 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-01-10 14:41:51.237176 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-01-10 14:41:51.237180 | orchestrator |
2026-01-10 14:41:51.237185 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-01-10 14:41:51.237189 | orchestrator | Saturday 10 January 2026 14:41:32 +0000 (0:00:13.623) 0:00:23.853 ******
2026-01-10 14:41:51.237193 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-01-10 14:41:51.237198 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-01-10 14:41:51.237202 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-10 14:41:51.237207 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-10 14:41:51.237224 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-10 14:41:51.237229 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-10 14:41:51.237234 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-01-10 14:41:51.237238 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-01-10 14:41:51.237242 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-01-10 14:41:51.237246 | orchestrator |
2026-01-10 14:41:51.237251 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-01-10 14:41:51.237255 | orchestrator | Saturday 10 January 2026 14:41:35 +0000 (0:00:03.446) 0:00:27.300 ******
2026-01-10 14:41:51.237262 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-01-10 14:41:51.237269 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.237275 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.237281 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-01-10 14:41:51.237287 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-10 14:41:51.237294 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-01-10 14:41:51.237300 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-01-10 14:41:51.237306 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-01-10 14:41:51.237312 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-01-10 14:41:51.237319 | orchestrator |
2026-01-10 14:41:51.237325 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:41:51.237333 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:41:51.237340 | orchestrator |
2026-01-10 14:41:51.237349 | orchestrator |
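The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines earlier in this log reflect a simple poll-and-wait pattern: the orchestrator checks each pending task ID on a fixed interval until every task reports SUCCESS. A minimal sketch of that pattern, assuming a hypothetical `get_state` callable (the function and parameter names here are illustrative, not the OSISM implementation):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll task states until all reach SUCCESS; raise on FAILURE or timeout.

    get_state: callable mapping a task ID to a state string
               (e.g. STARTED, SUCCESS, FAILURE), as seen in the log above.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        # Check every pending task once per polling round.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
            elif state == "FAILURE":
                raise RuntimeError(f"task {task_id} failed")
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Note that, as in the log, tasks can finish in any order: one task may reach SUCCESS while others remain STARTED, and new task IDs can appear as follow-up work starts.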
2026-01-10 14:41:51.237358 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:41:51.237362 | orchestrator | Saturday 10 January 2026 14:41:43 +0000 (0:00:07.402) 0:00:34.703 ******
2026-01-10 14:41:51.237366 | orchestrator | ===============================================================================
2026-01-10 14:41:51.237372 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.62s
2026-01-10 14:41:51.237378 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.40s
2026-01-10 14:41:51.237384 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.63s
2026-01-10 14:41:51.237391 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.39s
2026-01-10 14:41:51.237397 | orchestrator | Check if target directories exist --------------------------------------- 3.45s
2026-01-10 14:41:51.237404 | orchestrator | Create share directory -------------------------------------------------- 1.05s
2026-01-10 14:41:51.237410 | orchestrator |
2026-01-10 14:41:51.237416 | orchestrator |
2026-01-10 14:41:51.237423 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:41:51.237429 | orchestrator |
2026-01-10 14:41:51.237435 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:41:51.237443 | orchestrator | Saturday 10 January 2026 14:39:07 +0000 (0:00:00.403) 0:00:00.403 ******
2026-01-10 14:41:51.237448 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:51.237454 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:51.237460 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:51.237466 | orchestrator |
2026-01-10 14:41:51.237495 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:41:51.237501
| orchestrator | Saturday 10 January 2026 14:39:07 +0000 (0:00:00.337) 0:00:00.741 ****** 2026-01-10 14:41:51.237507 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-10 14:41:51.237514 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-10 14:41:51.237520 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-10 14:41:51.237526 | orchestrator | 2026-01-10 14:41:51.237532 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-10 14:41:51.237537 | orchestrator | 2026-01-10 14:41:51.237544 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:41:51.237550 | orchestrator | Saturday 10 January 2026 14:39:08 +0000 (0:00:00.431) 0:00:01.172 ****** 2026-01-10 14:41:51.237555 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:51.237561 | orchestrator | 2026-01-10 14:41:51.237567 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-10 14:41:51.237572 | orchestrator | Saturday 10 January 2026 14:39:08 +0000 (0:00:00.567) 0:00:01.740 ****** 2026-01-10 14:41:51.237593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.237603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.237621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.237629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237688 | orchestrator | 2026-01-10 14:41:51.237694 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-10 14:41:51.237700 | orchestrator | Saturday 10 January 2026 14:39:10 +0000 (0:00:01.984) 0:00:03.724 ****** 2026-01-10 14:41:51.237706 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:51.237713 | orchestrator | 2026-01-10 14:41:51.237720 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-10 14:41:51.237727 | orchestrator | Saturday 10 January 2026 14:39:10 +0000 (0:00:00.114) 0:00:03.839 ****** 2026-01-10 14:41:51.237734 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:51.237741 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:51.237747 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:51.237753 | orchestrator | 2026-01-10 14:41:51.237759 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-10 14:41:51.237765 | orchestrator | Saturday 10 January 2026 14:39:11 +0000 (0:00:00.615) 0:00:04.454 ****** 2026-01-10 14:41:51.237771 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:41:51.237777 | orchestrator | 2026-01-10 14:41:51.237783 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:41:51.237789 | orchestrator | Saturday 10 January 2026 14:39:12 +0000 (0:00:00.877) 0:00:05.332 ****** 2026-01-10 14:41:51.237795 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 
14:41:51.237800 | orchestrator | 2026-01-10 14:41:51.237807 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-10 14:41:51.237813 | orchestrator | Saturday 10 January 2026 14:39:12 +0000 (0:00:00.530) 0:00:05.863 ****** 2026-01-10 14:41:51.237826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.237836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.237841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.237846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 
14:41:51.237895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.237906 | orchestrator | 2026-01-10 14:41:51.237910 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-10 14:41:51.237914 | orchestrator | Saturday 10 January 2026 14:39:16 +0000 (0:00:03.633) 0:00:09.497 ****** 2026-01-10 14:41:51.237918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:41:51.237923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51.237930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:41:51.237934 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:41:51.237941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:41:51.237948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51.237952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:41:51.237956 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:51.237960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:41:51.237971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51.237980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:41:51.237984 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:51.237988 | orchestrator | 2026-01-10 14:41:51.237992 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-10 14:41:51.237996 | orchestrator | Saturday 10 January 2026 14:39:17 +0000 (0:00:01.031) 0:00:10.528 ****** 2026-01-10 14:41:51.238003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:41:51.238007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51.238011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:41:51.238074 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:51.238091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:41:51.238103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51 | INFO  | Task d9ab70d1-36e9-4757-b124-0aaf8ddcdbd3 is in state SUCCESS 2026-01-10 14:41:51.238280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:41:51.238284 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:51.238292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:41:51.238296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51.238306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:41:51.238310 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:51.238314 | orchestrator | 2026-01-10 14:41:51.238317 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-10 14:41:51.238321 | orchestrator | Saturday 10 January 2026 14:39:18 +0000 (0:00:00.896) 0:00:11.425 ****** 2026-01-10 14:41:51.238329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.238336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.238341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2026-01-10 14:41:51.238348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.238352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.238359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.238363 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.238367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.238374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.238379 | orchestrator | 2026-01-10 14:41:51.238385 | orchestrator | TASK [keystone : Copying over keystone.conf] 
*********************************** 2026-01-10 14:41:51.238395 | orchestrator | Saturday 10 January 2026 14:39:21 +0000 (0:00:03.499) 0:00:14.925 ****** 2026-01-10 14:41:51.238401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.238406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51.238413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.238417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51.238424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.238431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51.238435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.238441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.238445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.238449 | orchestrator | 2026-01-10 14:41:51.238453 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-10 14:41:51.238457 | orchestrator | Saturday 10 January 2026 14:39:26 +0000 (0:00:05.208) 0:00:20.133 ****** 2026-01-10 14:41:51.238461 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:51.238465 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:51.238469 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:51.238472 | orchestrator | 2026-01-10 14:41:51.238515 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-10 14:41:51.238521 | orchestrator | Saturday 10 January 2026 14:39:28 +0000 (0:00:01.461) 0:00:21.595 ****** 2026-01-10 14:41:51.238525 | 
orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:51.238528 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:51.238533 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:51.238539 | orchestrator | 2026-01-10 14:41:51.238545 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-10 14:41:51.238551 | orchestrator | Saturday 10 January 2026 14:39:28 +0000 (0:00:00.541) 0:00:22.137 ****** 2026-01-10 14:41:51.238567 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:51.238580 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:51.238586 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:51.238592 | orchestrator | 2026-01-10 14:41:51.238690 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-10 14:41:51.238701 | orchestrator | Saturday 10 January 2026 14:39:29 +0000 (0:00:00.296) 0:00:22.434 ****** 2026-01-10 14:41:51.238707 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:51.238713 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:51.238719 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:51.238724 | orchestrator | 2026-01-10 14:41:51.238730 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-10 14:41:51.238824 | orchestrator | Saturday 10 January 2026 14:39:29 +0000 (0:00:00.473) 0:00:22.908 ****** 2026-01-10 14:41:51.238833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:41:51.238839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51.238848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:41:51.238852 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:51.238856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:41:51.238871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:41:51.238875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:41:51.238880 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:51.238883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-10 14:41:51.238888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})
2026-01-10 14:41:51.238895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:41:51.238900 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:51.238903 | orchestrator |
2026-01-10 14:41:51.238907 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-10 14:41:51.238917 | orchestrator | Saturday 10 January 2026 14:39:30 +0000 (0:00:00.591) 0:00:23.499 ******
2026-01-10 14:41:51.238921 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:51.238924 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:51.238928 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:51.238932 | orchestrator |
2026-01-10 14:41:51.238936 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-01-10 14:41:51.238939 | orchestrator | Saturday 10 January 2026 14:39:30 +0000 (0:00:00.319) 0:00:23.819 ******
2026-01-10 14:41:51.238943 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-10 14:41:51.238947 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-10 14:41:51.238951 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-10 14:41:51.238955 | orchestrator |
2026-01-10 14:41:51.238958 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-01-10 14:41:51.238965 | orchestrator | Saturday 10 January 2026 14:39:32 +0000 (0:00:01.633) 0:00:25.453 ******
2026-01-10 14:41:51.238969 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:41:51.238973 | orchestrator |
2026-01-10 14:41:51.238976 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-01-10 14:41:51.238980 | orchestrator | Saturday 10 January 2026 14:39:33 +0000 (0:00:00.917) 0:00:26.371 ******
2026-01-10 14:41:51.238984 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:51.238988 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:51.238991 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:51.238995 | orchestrator |
2026-01-10 14:41:51.238999 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-01-10 14:41:51.239003 | orchestrator | Saturday 10 January 2026 14:39:33 +0000 (0:00:00.775) 0:00:27.146 ******
2026-01-10 14:41:51.239006 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-10 14:41:51.239010 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:41:51.239014 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-10 14:41:51.239018 | orchestrator |
2026-01-10 14:41:51.239021 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-01-10 14:41:51.239025 | orchestrator | Saturday 10 January 2026 14:39:35 +0000 (0:00:01.068) 0:00:28.214 ******
2026-01-10 14:41:51.239029 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:51.239033 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:51.239038 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:51.239044 | orchestrator |
2026-01-10 14:41:51.239051 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-01-10 14:41:51.239060 | orchestrator | Saturday 10 January 2026 14:39:35 +0000 (0:00:00.350) 0:00:28.565 ******
2026-01-10 14:41:51.239067 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-10 14:41:51.239073 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-10 14:41:51.239078 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-10 14:41:51.239085 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-10 14:41:51.239091 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-10 14:41:51.239097 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-10 14:41:51.239102 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-10 14:41:51.239108 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-10 14:41:51.239114 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-10 14:41:51.239125 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-10 14:41:51.239131 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-10 14:41:51.239137 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-10 14:41:51.239142 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-10 14:41:51.239148 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-10 14:41:51.239158 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-10 14:41:51.239164 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:41:51.239170 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:41:51.239176 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:41:51.239182 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:41:51.239188 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:41:51.239194 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:41:51.239199 | orchestrator |
2026-01-10 14:41:51.239205 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-01-10 14:41:51.239212 | orchestrator | Saturday 10 January 2026 14:39:44 +0000 (0:00:09.257) 0:00:37.822 ******
2026-01-10 14:41:51.239218 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:41:51.239225 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:41:51.239232 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:41:51.239236 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:41:51.239241 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:41:51.239247 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:41:51.239254 | orchestrator |
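Each wrapped run above is a sequence of Zuul console records of the form `<executor timestamp> | <node> | <message>`. A minimal sketch of a splitter for such lines, useful when post-processing these logs; the function name and return shape are ours, not part of Zuul:

```python
import re

# Matches one console record: "<microsecond timestamp> | <node> | <message>".
# The field layout is taken directly from the log lines in this job output.
RECORD_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6}) \| "
    r"(?P<node>\S+) \| (?P<msg>.*)$"
)

def split_record(line: str):
    """Return (timestamp, node, message) for one record, or None if it doesn't match."""
    m = RECORD_RE.match(line)
    if m is None:
        return None
    return m.group("ts"), m.group("node"), m.group("msg")

rec = split_record(
    "2026-01-10 14:41:51.239258 | orchestrator | TASK [keystone : Check keystone containers]"
)
print(rec)
```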
2026-01-10 14:41:51.239258 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-10 14:41:51.239265 | orchestrator | Saturday 10 January 2026 14:39:47 +0000 (0:00:03.037) 0:00:40.860 ****** 2026-01-10 14:41:51.239270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.239274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.239286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-10 14:41:51.239292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.239299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.239303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:41:51.239307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.239315 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.239319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:41:51.239323 | orchestrator | 2026-01-10 14:41:51.239328 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:41:51.239332 | orchestrator | Saturday 10 January 2026 14:39:50 +0000 (0:00:02.455) 0:00:43.315 ****** 2026-01-10 14:41:51.239336 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:51.239340 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:51.239344 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:51.239348 | orchestrator | 2026-01-10 14:41:51.239352 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-10 14:41:51.239355 | orchestrator | Saturday 10 January 2026 
14:39:50 +0000 (0:00:00.288) 0:00:43.604 ******
2026-01-10 14:41:51.239359 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:51.239363 | orchestrator |
2026-01-10 14:41:51.239367 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-01-10 14:41:51.239371 | orchestrator | Saturday 10 January 2026 14:39:53 +0000 (0:00:02.718) 0:00:46.323 ******
2026-01-10 14:41:51.239374 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:51.239378 | orchestrator |
2026-01-10 14:41:51.239382 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-01-10 14:41:51.239386 | orchestrator | Saturday 10 January 2026 14:39:55 +0000 (0:00:02.724) 0:00:49.047 ******
2026-01-10 14:41:51.239389 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:51.239393 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:51.239397 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:51.239401 | orchestrator |
2026-01-10 14:41:51.239405 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-01-10 14:41:51.239408 | orchestrator | Saturday 10 January 2026 14:39:56 +0000 (0:00:01.095) 0:00:50.142 ******
2026-01-10 14:41:51.239412 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:51.239416 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:51.239420 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:51.239423 | orchestrator |
2026-01-10 14:41:51.239427 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-01-10 14:41:51.239431 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:00.321) 0:00:50.464 ******
2026-01-10 14:41:51.239435 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:51.239439 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:51.239442 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:51.239446 | orchestrator |
2026-01-10 14:41:51.239452 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-01-10 14:41:51.239460 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:00.341) 0:00:50.805 ******
2026-01-10 14:41:51.239464 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:51.239469 | orchestrator |
2026-01-10 14:41:51.239473 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-01-10 14:41:51.239497 | orchestrator | Saturday 10 January 2026 14:40:14 +0000 (0:00:16.839) 0:01:07.645 ******
2026-01-10 14:41:51.239503 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:51.239508 | orchestrator |
2026-01-10 14:41:51.239512 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-10 14:41:51.239516 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:12.147) 0:01:19.793 ******
2026-01-10 14:41:51.239521 | orchestrator |
2026-01-10 14:41:51.239525 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-10 14:41:51.239529 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:00.065) 0:01:19.858 ******
2026-01-10 14:41:51.239533 | orchestrator |
2026-01-10 14:41:51.239537 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-10 14:41:51.239542 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:00.070) 0:01:20.004 ******
2026-01-10 14:41:51.239546 | orchestrator |
2026-01-10 14:41:51.239550 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-01-10 14:41:51.239554 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:00.070) 0:01:20.004 ******
2026-01-10 14:41:51.239559 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:51.239563 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:51.239568 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:51.239572 | orchestrator |
2026-01-10 14:41:51.239576 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-01-10 14:41:51.239580 | orchestrator | Saturday 10 January 2026 14:40:39 +0000 (0:00:13.010) 0:01:33.015 ******
2026-01-10 14:41:51.239585 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:51.239589 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:51.239593 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:51.239597 | orchestrator |
2026-01-10 14:41:51.239601 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-01-10 14:41:51.239606 | orchestrator | Saturday 10 January 2026 14:40:45 +0000 (0:00:05.245) 0:01:38.260 ******
2026-01-10 14:41:51.239611 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:51.239615 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:51.239619 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:51.239624 | orchestrator |
2026-01-10 14:41:51.239628 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-10 14:41:51.239632 | orchestrator | Saturday 10 January 2026 14:40:52 +0000 (0:00:07.177) 0:01:45.438 ******
2026-01-10 14:41:51.239637 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:41:51.239641 | orchestrator |
2026-01-10 14:41:51.239646 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-01-10 14:41:51.239650 | orchestrator | Saturday 10 January 2026 14:40:53 +0000 (0:00:00.835) 0:01:46.274 ******
2026-01-10 14:41:51.239654 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:51.239658 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:51.239661 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:51.239665 | orchestrator |
2026-01-10 14:41:51.239669 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-01-10 14:41:51.239673 | orchestrator | Saturday 10 January 2026 14:40:53 +0000 (0:00:00.876) 0:01:47.151 ******
2026-01-10 14:41:51.239676 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:51.239680 | orchestrator |
2026-01-10 14:41:51.239684 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-01-10 14:41:51.239688 | orchestrator | Saturday 10 January 2026 14:40:55 +0000 (0:00:01.553) 0:01:48.705 ******
2026-01-10 14:41:51.239694 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-01-10 14:41:51.239702 | orchestrator |
2026-01-10 14:41:51.239706 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-01-10 14:41:51.239710 | orchestrator | Saturday 10 January 2026 14:41:06 +0000 (0:00:11.394) 0:02:00.099 ******
2026-01-10 14:41:51.239713 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-01-10 14:41:51.239717 | orchestrator |
2026-01-10 14:41:51.239721 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-01-10 14:41:51.239728 | orchestrator | Saturday 10 January 2026 14:41:35 +0000 (0:00:28.323) 0:02:28.422 ******
2026-01-10 14:41:51.239735 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-01-10 14:41:51.239739 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-01-10 14:41:51.239743 | orchestrator |
2026-01-10 14:41:51.239747 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-01-10 14:41:51.239751 | orchestrator | Saturday 10 January 2026 14:41:43 +0000 (0:00:08.615) 0:02:37.038 ******
2026-01-10 14:41:51.239754 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:51.239758 | orchestrator |
2026-01-10 14:41:51.239762 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-01-10 14:41:51.239766 | orchestrator | Saturday 10 January 2026 14:41:44 +0000 (0:00:00.136) 0:02:37.175 ******
2026-01-10 14:41:51.239769 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:51.239773 | orchestrator |
2026-01-10 14:41:51.239777 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-01-10 14:41:51.239781 | orchestrator | Saturday 10 January 2026 14:41:44 +0000 (0:00:00.134) 0:02:37.310 ******
2026-01-10 14:41:51.239784 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:51.239788 | orchestrator |
2026-01-10 14:41:51.239792 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-01-10 14:41:51.239795 | orchestrator | Saturday 10 January 2026 14:41:44 +0000 (0:00:00.127) 0:02:37.437 ******
2026-01-10 14:41:51.239799 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:51.239803 | orchestrator |
2026-01-10 14:41:51.239810 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-01-10 14:41:51.239814 | orchestrator | Saturday 10 January 2026 14:41:44 +0000 (0:00:00.482) 0:02:37.920 ******
2026-01-10 14:41:51.239817 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:51.239821 | orchestrator |
2026-01-10 14:41:51.239825 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-10 14:41:51.239828 | orchestrator | Saturday 10 January 2026 14:41:47 +0000 (0:00:03.176) 0:02:41.096 ******
2026-01-10 14:41:51.239832 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:51.239836 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:51.239839 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:51.239843 | orchestrator |
2026-01-10 14:41:51.239847 | orchestrator |
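The `Saturday 10 January 2026 … (0:00:03.176) 0:02:41.096` lines are Ansible `profile_tasks`-style timing output: the parenthesised value is the previous task's duration and the second value is the cumulative play time. A hedged sketch that extracts both as seconds; the helper names and return shape are ours, not part of Ansible:

```python
import re

# Matches the "(H:MM:SS.mmm) H:MM:SS.mmm" pair in a profile_tasks timing line,
# as seen in the log above.
TIMING_RE = re.compile(
    r"\((?P<last>\d+:\d{2}:\d{2}\.\d+)\)\s+(?P<total>\d+:\d{2}:\d{2}\.\d+)"
)

def to_seconds(hms: str) -> float:
    """Convert 'H:MM:SS.mmm' to seconds."""
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def parse_timing(line: str):
    """Return (task_duration_s, elapsed_total_s), or None if the line has no timing pair."""
    m = TIMING_RE.search(line)
    if m is None:
        return None
    return to_seconds(m.group("last")), to_seconds(m.group("total"))

# Example line copied from this job's output.
result = parse_timing(
    "Saturday 10 January 2026 14:41:47 +0000 (0:00:03.176) 0:02:41.096 ******"
)
print(result)
```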
PLAY RECAP *********************************************************************
2026-01-10 14:41:51.239851 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 14:41:51.239856 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-10 14:41:51.239860 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-10 14:41:51.239863 | orchestrator |
2026-01-10 14:41:51.239867 | orchestrator |
2026-01-10 14:41:51.239871 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:41:51.239875 | orchestrator | Saturday 10 January 2026 14:41:48 +0000 (0:00:00.423) 0:02:41.520 ******
2026-01-10 14:41:51.239878 | orchestrator | ===============================================================================
2026-01-10 14:41:51.239882 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.32s
2026-01-10 14:41:51.239889 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.84s
2026-01-10 14:41:51.239893 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 13.01s
2026-01-10 14:41:51.239897 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.15s
2026-01-10 14:41:51.239900 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.39s
2026-01-10 14:41:51.239904 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.26s
2026-01-10 14:41:51.239908 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 8.62s
2026-01-10 14:41:51.239911 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.18s
2026-01-10 14:41:51.239915 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.25s
2026-01-10 14:41:51.239919 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.21s
2026-01-10 14:41:51.239922 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.63s
2026-01-10 14:41:51.239926 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.50s
2026-01-10 14:41:51.239930 | orchestrator | keystone : Creating default user role ----------------------------------- 3.18s
2026-01-10 14:41:51.239934 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.04s
2026-01-10 14:41:51.239937 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.72s
2026-01-10 14:41:51.239941 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.72s
2026-01-10 14:41:51.239945 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.46s
2026-01-10 14:41:51.239951 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.98s
2026-01-10 14:41:51.239954 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.63s
2026-01-10 14:41:51.239958 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.55s
2026-01-10 14:41:51.239962 | orchestrator | 2026-01-10 14:41:51 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:41:51.239966 | orchestrator | 2026-01-10 14:41:51 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED
2026-01-10 14:41:51.239970 | orchestrator | 2026-01-10 14:41:51 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED
2026-01-10 14:41:51.239973 | orchestrator | 2026-01-10 14:41:51 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:41:51.239977 | orchestrator | 2026-01-10 14:41:51 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED
2026-01-10 14:41:51.239981 | orchestrator | 2026-01-10 14:41:51 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:54.255926 | orchestrator | 2026-01-10 14:41:54 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:41:54.256058 | orchestrator | 2026-01-10 14:41:54 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED
2026-01-10 14:41:54.256993 | orchestrator | 2026-01-10 14:41:54 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED
2026-01-10 14:41:54.257335 | orchestrator | 2026-01-10 14:41:54 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:41:54.257941 | orchestrator | 2026-01-10 14:41:54 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED
2026-01-10 14:41:54.257972 | orchestrator | 2026-01-10 14:41:54 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:57.285135 | orchestrator | 2026-01-10 14:41:57 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:41:57.287790 | orchestrator | 2026-01-10 14:41:57 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED
2026-01-10 14:41:57.287860 | orchestrator | 2026-01-10 14:41:57 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED
2026-01-10 14:41:57.290126 | orchestrator | 2026-01-10 14:41:57 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:41:57.291880 | orchestrator | 2026-01-10 14:41:57 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED
2026-01-10 14:41:57.291934 | orchestrator | 2026-01-10 14:41:57 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:00.333295 | orchestrator | 2026-01-10 14:42:00 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:42:00.334872 | orchestrator | 2026-01-10 14:42:00 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED
2026-01-10 14:42:00.347204 | orchestrator | 2026-01-10 14:42:00 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED
2026-01-10 14:42:00.348460 | orchestrator | 2026-01-10 14:42:00 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:42:00.349626 | orchestrator | 2026-01-10 14:42:00 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED
2026-01-10 14:42:00.349649 | orchestrator | 2026-01-10 14:42:00 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:03.399835 | orchestrator | 2026-01-10 14:42:03 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:42:03.399886 | orchestrator | 2026-01-10 14:42:03 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED
2026-01-10 14:42:03.402287 | orchestrator | 2026-01-10 14:42:03 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED
2026-01-10 14:42:03.403316 | orchestrator | 2026-01-10 14:42:03 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:42:03.406861 | orchestrator | 2026-01-10 14:42:03 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED
2026-01-10 14:42:03.406905 | orchestrator | 2026-01-10 14:42:03 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:06.461964 | orchestrator | 2026-01-10 14:42:06 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:42:06.463618 | orchestrator | 2026-01-10 14:42:06 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED
2026-01-10 14:42:06.465945 | orchestrator | 2026-01-10 14:42:06 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED
2026-01-10 14:42:06.467972 | orchestrator | 2026-01-10 14:42:06 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:42:06.468970 | orchestrator | 2026-01-10 14:42:06 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED
2026-01-10 14:42:06.469023 | orchestrator | 2026-01-10 14:42:06 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:09.509309 | orchestrator | 2026-01-10 14:42:09 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:42:09.510622 | orchestrator | 2026-01-10 14:42:09 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED
2026-01-10 14:42:09.512394 | orchestrator | 2026-01-10 14:42:09 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED
2026-01-10 14:42:09.514143 | orchestrator | 2026-01-10 14:42:09 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:42:09.516092 | orchestrator | 2026-01-10 14:42:09 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED
2026-01-10 14:42:09.516129 | orchestrator | 2026-01-10 14:42:09 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:12.572629 | orchestrator | 2026-01-10 14:42:12 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:42:12.573519 | orchestrator | 2026-01-10 14:42:12 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED
2026-01-10 14:42:12.574198 | orchestrator | 2026-01-10 14:42:12 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED
2026-01-10 14:42:12.575195 | orchestrator | 2026-01-10 14:42:12 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:42:12.576420 | orchestrator | 2026-01-10 14:42:12 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED
2026-01-10 14:42:12.576639 | orchestrator | 2026-01-10 14:42:12 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:15.624100 | orchestrator | 2026-01-10 14:42:15 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:42:15.627450 | orchestrator | 2026-01-10 14:42:15 | INFO  | Task
8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:15.629404 | orchestrator | 2026-01-10 14:42:15 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED 2026-01-10 14:42:15.631376 | orchestrator | 2026-01-10 14:42:15 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:15.632877 | orchestrator | 2026-01-10 14:42:15 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED 2026-01-10 14:42:15.632937 | orchestrator | 2026-01-10 14:42:15 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:18.682656 | orchestrator | 2026-01-10 14:42:18 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:18.686759 | orchestrator | 2026-01-10 14:42:18 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:18.689715 | orchestrator | 2026-01-10 14:42:18 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED 2026-01-10 14:42:18.691926 | orchestrator | 2026-01-10 14:42:18 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:18.693896 | orchestrator | 2026-01-10 14:42:18 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED 2026-01-10 14:42:18.693944 | orchestrator | 2026-01-10 14:42:18 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:21.751292 | orchestrator | 2026-01-10 14:42:21 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:21.752107 | orchestrator | 2026-01-10 14:42:21 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:21.754537 | orchestrator | 2026-01-10 14:42:21 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED 2026-01-10 14:42:21.756137 | orchestrator | 2026-01-10 14:42:21 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:21.758372 | orchestrator | 2026-01-10 14:42:21 | INFO  | Task 
1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED 2026-01-10 14:42:21.758711 | orchestrator | 2026-01-10 14:42:21 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:24.820149 | orchestrator | 2026-01-10 14:42:24 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:24.820818 | orchestrator | 2026-01-10 14:42:24 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:24.822181 | orchestrator | 2026-01-10 14:42:24 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED 2026-01-10 14:42:24.823708 | orchestrator | 2026-01-10 14:42:24 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:24.825240 | orchestrator | 2026-01-10 14:42:24 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED 2026-01-10 14:42:24.825326 | orchestrator | 2026-01-10 14:42:24 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:27.875937 | orchestrator | 2026-01-10 14:42:27 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:27.877041 | orchestrator | 2026-01-10 14:42:27 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:27.878720 | orchestrator | 2026-01-10 14:42:27 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED 2026-01-10 14:42:27.880742 | orchestrator | 2026-01-10 14:42:27 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:27.882610 | orchestrator | 2026-01-10 14:42:27 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED 2026-01-10 14:42:27.883245 | orchestrator | 2026-01-10 14:42:27 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:30.929305 | orchestrator | 2026-01-10 14:42:30 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:30.929398 | orchestrator | 2026-01-10 14:42:30 | INFO  | Task 
8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:30.930128 | orchestrator | 2026-01-10 14:42:30 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED 2026-01-10 14:42:30.930944 | orchestrator | 2026-01-10 14:42:30 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:30.931813 | orchestrator | 2026-01-10 14:42:30 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state STARTED 2026-01-10 14:42:30.931846 | orchestrator | 2026-01-10 14:42:30 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:34.022925 | orchestrator | 2026-01-10 14:42:34 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:34.025258 | orchestrator | 2026-01-10 14:42:34 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:34.025723 | orchestrator | 2026-01-10 14:42:34 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED 2026-01-10 14:42:34.028086 | orchestrator | 2026-01-10 14:42:34 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:34.028830 | orchestrator | 2026-01-10 14:42:34 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state STARTED 2026-01-10 14:42:34.029680 | orchestrator | 2026-01-10 14:42:34 | INFO  | Task 1ed5a7d0-6dad-49c1-8e89-57d6c16f5d44 is in state SUCCESS 2026-01-10 14:42:34.030769 | orchestrator | 2026-01-10 14:42:34 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:37.084781 | orchestrator | 2026-01-10 14:42:37 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:37.084858 | orchestrator | 2026-01-10 14:42:37 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:37.087986 | orchestrator | 2026-01-10 14:42:37 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED 2026-01-10 14:42:37.088044 | orchestrator | 2026-01-10 14:42:37 | INFO  | Task 
7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:37.091700 | orchestrator | 2026-01-10 14:42:37 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state STARTED 2026-01-10 14:42:37.091762 | orchestrator | 2026-01-10 14:42:37 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:40.119460 | orchestrator | 2026-01-10 14:42:40 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:40.119771 | orchestrator | 2026-01-10 14:42:40 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:40.120807 | orchestrator | 2026-01-10 14:42:40 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED 2026-01-10 14:42:40.121594 | orchestrator | 2026-01-10 14:42:40 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:40.122519 | orchestrator | 2026-01-10 14:42:40 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state STARTED 2026-01-10 14:42:40.127807 | orchestrator | 2026-01-10 14:42:40 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:43.167218 | orchestrator | 2026-01-10 14:42:43 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:43.167540 | orchestrator | 2026-01-10 14:42:43 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:43.169147 | orchestrator | 2026-01-10 14:42:43 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state STARTED 2026-01-10 14:42:43.171702 | orchestrator | 2026-01-10 14:42:43 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:43.172804 | orchestrator | 2026-01-10 14:42:43 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state STARTED 2026-01-10 14:42:43.172866 | orchestrator | 2026-01-10 14:42:43 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:46.259381 | orchestrator | 2026-01-10 14:42:46 | INFO  | Task 
bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:46.259855 | orchestrator | 2026-01-10 14:42:46 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:46.265569 | orchestrator | 2026-01-10 14:42:46 | INFO  | Task 7f839770-aeb7-4b66-b8f4-a0371816afb2 is in state SUCCESS 2026-01-10 14:42:46.265636 | orchestrator | 2026-01-10 14:42:46.265644 | orchestrator | 2026-01-10 14:42:46.265651 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-10 14:42:46.265659 | orchestrator | 2026-01-10 14:42:46.265665 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-10 14:42:46.265672 | orchestrator | Saturday 10 January 2026 14:41:53 +0000 (0:00:00.082) 0:00:00.082 ****** 2026-01-10 14:42:46.265679 | orchestrator | changed: [localhost] 2026-01-10 14:42:46.265686 | orchestrator | 2026-01-10 14:42:46.265692 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-01-10 14:42:46.265699 | orchestrator | Saturday 10 January 2026 14:41:54 +0000 (0:00:00.898) 0:00:00.981 ****** 2026-01-10 14:42:46.265705 | orchestrator | changed: [localhost] 2026-01-10 14:42:46.265711 | orchestrator | 2026-01-10 14:42:46.265731 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-01-10 14:42:46.265738 | orchestrator | Saturday 10 January 2026 14:42:25 +0000 (0:00:31.248) 0:00:32.229 ****** 2026-01-10 14:42:46.265744 | orchestrator | changed: [localhost] 2026-01-10 14:42:46.265750 | orchestrator | 2026-01-10 14:42:46.265756 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:42:46.265762 | orchestrator | 2026-01-10 14:42:46.265768 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:42:46.265774 | orchestrator | Saturday 10 January 
2026 14:42:30 +0000 (0:00:04.378) 0:00:36.608 ****** 2026-01-10 14:42:46.265780 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:46.265787 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:42:46.265810 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:42:46.265817 | orchestrator | 2026-01-10 14:42:46.265823 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:42:46.265829 | orchestrator | Saturday 10 January 2026 14:42:30 +0000 (0:00:00.335) 0:00:36.944 ****** 2026-01-10 14:42:46.265836 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-01-10 14:42:46.265842 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-01-10 14:42:46.265867 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-01-10 14:42:46.265874 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-01-10 14:42:46.265880 | orchestrator | 2026-01-10 14:42:46.265887 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-01-10 14:42:46.265893 | orchestrator | skipping: no hosts matched 2026-01-10 14:42:46.265900 | orchestrator | 2026-01-10 14:42:46.265906 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:42:46.265913 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:42:46.265921 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:42:46.265929 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:42:46.265935 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:42:46.265952 | orchestrator | 2026-01-10 14:42:46.265958 | orchestrator | 2026-01-10 14:42:46.265965 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-10 14:42:46.265979 | orchestrator | Saturday 10 January 2026 14:42:31 +0000 (0:00:00.630) 0:00:37.575 ****** 2026-01-10 14:42:46.265985 | orchestrator | =============================================================================== 2026-01-10 14:42:46.265991 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 31.25s 2026-01-10 14:42:46.265998 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.38s 2026-01-10 14:42:46.266004 | orchestrator | Ensure the destination directory exists --------------------------------- 0.90s 2026-01-10 14:42:46.266010 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-01-10 14:42:46.266055 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-01-10 14:42:46.266063 | orchestrator | 2026-01-10 14:42:46.266069 | orchestrator | 2026-01-10 14:42:46.266075 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-01-10 14:42:46.266081 | orchestrator | 2026-01-10 14:42:46.266087 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-01-10 14:42:46.266093 | orchestrator | Saturday 10 January 2026 14:41:48 +0000 (0:00:00.231) 0:00:00.231 ****** 2026-01-10 14:42:46.266099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-01-10 14:42:46.266106 | orchestrator | 2026-01-10 14:42:46.266112 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-01-10 14:42:46.266118 | orchestrator | Saturday 10 January 2026 14:41:48 +0000 (0:00:00.223) 0:00:00.455 ****** 2026-01-10 14:42:46.266124 | orchestrator | changed: [testbed-manager] => 
(item=/opt/cephclient/configuration) 2026-01-10 14:42:46.266131 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-01-10 14:42:46.266138 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-01-10 14:42:46.266144 | orchestrator | 2026-01-10 14:42:46.266152 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-01-10 14:42:46.266158 | orchestrator | Saturday 10 January 2026 14:41:49 +0000 (0:00:01.511) 0:00:01.966 ****** 2026-01-10 14:42:46.266165 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-01-10 14:42:46.266172 | orchestrator | 2026-01-10 14:42:46.266192 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-01-10 14:42:46.266199 | orchestrator | Saturday 10 January 2026 14:41:51 +0000 (0:00:01.730) 0:00:03.696 ****** 2026-01-10 14:42:46.266206 | orchestrator | changed: [testbed-manager] 2026-01-10 14:42:46.266213 | orchestrator | 2026-01-10 14:42:46.266220 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-01-10 14:42:46.266234 | orchestrator | Saturday 10 January 2026 14:41:53 +0000 (0:00:01.551) 0:00:05.248 ****** 2026-01-10 14:42:46.266241 | orchestrator | changed: [testbed-manager] 2026-01-10 14:42:46.266249 | orchestrator | 2026-01-10 14:42:46.266255 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-01-10 14:42:46.266263 | orchestrator | Saturday 10 January 2026 14:41:53 +0000 (0:00:00.689) 0:00:05.937 ****** 2026-01-10 14:42:46.266270 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-01-10 14:42:46.266277 | orchestrator | ok: [testbed-manager] 2026-01-10 14:42:46.266288 | orchestrator | 2026-01-10 14:42:46.266305 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-01-10 14:42:46.266315 | orchestrator | Saturday 10 January 2026 14:42:34 +0000 (0:00:40.208) 0:00:46.146 ****** 2026-01-10 14:42:46.266325 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-01-10 14:42:46.266335 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-01-10 14:42:46.266347 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-01-10 14:42:46.266358 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-01-10 14:42:46.266369 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-01-10 14:42:46.266380 | orchestrator | 2026-01-10 14:42:46.266392 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-01-10 14:42:46.266403 | orchestrator | Saturday 10 January 2026 14:42:39 +0000 (0:00:04.961) 0:00:51.107 ****** 2026-01-10 14:42:46.266413 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-01-10 14:42:46.266423 | orchestrator | 2026-01-10 14:42:46.266454 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-01-10 14:42:46.266464 | orchestrator | Saturday 10 January 2026 14:42:39 +0000 (0:00:00.344) 0:00:51.452 ****** 2026-01-10 14:42:46.266475 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:42:46.266486 | orchestrator | 2026-01-10 14:42:46.266497 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-01-10 14:42:46.266508 | orchestrator | Saturday 10 January 2026 14:42:39 +0000 (0:00:00.101) 0:00:51.554 ****** 2026-01-10 14:42:46.266518 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:42:46.266529 | orchestrator | 2026-01-10 14:42:46.266535 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-01-10 14:42:46.266542 | orchestrator | Saturday 10 January 2026 14:42:39 +0000 (0:00:00.365) 0:00:51.919 ****** 2026-01-10 14:42:46.266548 | orchestrator | changed: [testbed-manager] 2026-01-10 14:42:46.266554 | orchestrator | 2026-01-10 14:42:46.266560 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-01-10 14:42:46.266567 | orchestrator | Saturday 10 January 2026 14:42:41 +0000 (0:00:01.572) 0:00:53.492 ****** 2026-01-10 14:42:46.266573 | orchestrator | changed: [testbed-manager] 2026-01-10 14:42:46.266579 | orchestrator | 2026-01-10 14:42:46.266585 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-01-10 14:42:46.266591 | orchestrator | Saturday 10 January 2026 14:42:42 +0000 (0:00:00.929) 0:00:54.421 ****** 2026-01-10 14:42:46.266597 | orchestrator | changed: [testbed-manager] 2026-01-10 14:42:46.266604 | orchestrator | 2026-01-10 14:42:46.266610 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-01-10 14:42:46.266616 | orchestrator | Saturday 10 January 2026 14:42:43 +0000 (0:00:00.583) 0:00:55.005 ****** 2026-01-10 14:42:46.266622 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-01-10 14:42:46.266628 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-01-10 14:42:46.266634 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-01-10 14:42:46.266641 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-01-10 14:42:46.266647 | orchestrator | 2026-01-10 14:42:46.266653 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:42:46.266659 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:42:46.266672 | orchestrator | 2026-01-10 14:42:46.266679 | orchestrator | 2026-01-10 
14:42:46.266685 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:42:46.266691 | orchestrator | Saturday 10 January 2026 14:42:45 +0000 (0:00:02.031) 0:00:57.036 ****** 2026-01-10 14:42:46.266697 | orchestrator | =============================================================================== 2026-01-10 14:42:46.266703 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.21s 2026-01-10 14:42:46.266709 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.96s 2026-01-10 14:42:46.266715 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 2.03s 2026-01-10 14:42:46.266721 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.73s 2026-01-10 14:42:46.266728 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.57s 2026-01-10 14:42:46.266734 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.55s 2026-01-10 14:42:46.266740 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.51s 2026-01-10 14:42:46.266746 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.93s 2026-01-10 14:42:46.266752 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.69s 2026-01-10 14:42:46.266758 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s 2026-01-10 14:42:46.266764 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.37s 2026-01-10 14:42:46.266776 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.34s 2026-01-10 14:42:46.266783 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2026-01-10 14:42:46.266789 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.10s 2026-01-10 14:42:46.270259 | orchestrator | 2026-01-10 14:42:46 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:46.272863 | orchestrator | 2026-01-10 14:42:46 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state STARTED 2026-01-10 14:42:46.272945 | orchestrator | 2026-01-10 14:42:46 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:49.336900 | orchestrator | 2026-01-10 14:42:49 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:49.340074 | orchestrator | 2026-01-10 14:42:49 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:49.340670 | orchestrator | 2026-01-10 14:42:49 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:49.341698 | orchestrator | 2026-01-10 14:42:49 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:42:49.342158 | orchestrator | 2026-01-10 14:42:49 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state STARTED 2026-01-10 14:42:49.342195 | orchestrator | 2026-01-10 14:42:49 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:52.388955 | orchestrator | 2026-01-10 14:42:52 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:52.389479 | orchestrator | 2026-01-10 14:42:52 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:52.389990 | orchestrator | 2026-01-10 14:42:52 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:52.390876 | orchestrator | 2026-01-10 14:42:52 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:42:52.391418 | orchestrator | 2026-01-10 14:42:52 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state STARTED 2026-01-10 14:42:52.391510 | orchestrator | 
2026-01-10 14:42:52 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:55.445939 | orchestrator | 2026-01-10 14:42:55 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:55.447238 | orchestrator | 2026-01-10 14:42:55 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:55.447764 | orchestrator | 2026-01-10 14:42:55 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:55.448495 | orchestrator | 2026-01-10 14:42:55 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:42:55.449004 | orchestrator | 2026-01-10 14:42:55 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state STARTED 2026-01-10 14:42:55.449087 | orchestrator | 2026-01-10 14:42:55 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:42:58.519510 | orchestrator | 2026-01-10 14:42:58 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:42:58.519656 | orchestrator | 2026-01-10 14:42:58 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:42:58.519715 | orchestrator | 2026-01-10 14:42:58 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:42:58.520370 | orchestrator | 2026-01-10 14:42:58 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:42:58.520946 | orchestrator | 2026-01-10 14:42:58 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state STARTED 2026-01-10 14:42:58.520974 | orchestrator | 2026-01-10 14:42:58 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:01.555214 | orchestrator | 2026-01-10 14:43:01 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:43:01.555316 | orchestrator | 2026-01-10 14:43:01 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:43:01.556082 | orchestrator | 2026-01-10 14:43:01 | INFO  | 
Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:43:01.557026 | orchestrator | 2026-01-10 14:43:01 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:43:01.558073 | orchestrator | 2026-01-10 14:43:01 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state STARTED 2026-01-10 14:43:01.558112 | orchestrator | 2026-01-10 14:43:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:02.298403 | orchestrator | 2026-01-10 14:44:02 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:44:02.298746 | orchestrator | 2026-01-10 14:44:02 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:44:02.299692 | orchestrator | 2026-01-10 14:44:02 | INFO  | Task 
7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:44:02.300326 | orchestrator | 2026-01-10 14:44:02 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:44:02.301166 | orchestrator | 2026-01-10 14:44:02 | INFO  | Task 551b9a01-8d3f-4103-b61b-5ffc2eaf5861 is in state SUCCESS 2026-01-10 14:44:02.301226 | orchestrator | 2026-01-10 14:44:02.303804 | orchestrator | 2026-01-10 14:44:02.303864 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:44:02.303871 | orchestrator | 2026-01-10 14:44:02.303876 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:44:02.303882 | orchestrator | Saturday 10 January 2026 14:42:38 +0000 (0:00:00.244) 0:00:00.244 ****** 2026-01-10 14:44:02.303886 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:44:02.303891 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:44:02.303913 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:44:02.303917 | orchestrator | 2026-01-10 14:44:02.303921 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:44:02.303925 | orchestrator | Saturday 10 January 2026 14:42:39 +0000 (0:00:00.335) 0:00:00.579 ****** 2026-01-10 14:44:02.303930 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-10 14:44:02.303934 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-10 14:44:02.303938 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-10 14:44:02.303942 | orchestrator | 2026-01-10 14:44:02.303946 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-10 14:44:02.303949 | orchestrator | 2026-01-10 14:44:02.303953 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-10 14:44:02.303958 | orchestrator | Saturday 10 
January 2026 14:42:39 +0000 (0:00:00.528) 0:00:01.108 ****** 2026-01-10 14:44:02.303962 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:44:02.303967 | orchestrator | 2026-01-10 14:44:02.303971 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-10 14:44:02.303975 | orchestrator | Saturday 10 January 2026 14:42:40 +0000 (0:00:01.169) 0:00:02.277 ****** 2026-01-10 14:44:02.303979 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-10 14:44:02.303983 | orchestrator | 2026-01-10 14:44:02.303987 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-01-10 14:44:02.303991 | orchestrator | Saturday 10 January 2026 14:42:45 +0000 (0:00:04.780) 0:00:07.057 ****** 2026-01-10 14:44:02.303994 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-10 14:44:02.303999 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-10 14:44:02.304002 | orchestrator | 2026-01-10 14:44:02.304006 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-10 14:44:02.304010 | orchestrator | Saturday 10 January 2026 14:42:53 +0000 (0:00:07.918) 0:00:14.976 ****** 2026-01-10 14:44:02.304014 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:44:02.304018 | orchestrator | 2026-01-10 14:44:02.304022 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-10 14:44:02.304025 | orchestrator | Saturday 10 January 2026 14:42:57 +0000 (0:00:04.185) 0:00:19.162 ****** 2026-01-10 14:44:02.304029 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:44:02.304033 | orchestrator | changed: [testbed-node-0] => 
(item=placement -> service) 2026-01-10 14:44:02.304036 | orchestrator | 2026-01-10 14:44:02.304040 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-01-10 14:44:02.304044 | orchestrator | Saturday 10 January 2026 14:43:02 +0000 (0:00:04.699) 0:00:23.862 ****** 2026-01-10 14:44:02.304048 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:44:02.304054 | orchestrator | 2026-01-10 14:44:02.304060 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-01-10 14:44:02.304066 | orchestrator | Saturday 10 January 2026 14:43:07 +0000 (0:00:04.787) 0:00:28.649 ****** 2026-01-10 14:44:02.304074 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-10 14:44:02.304082 | orchestrator | 2026-01-10 14:44:02.304089 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-10 14:44:02.304095 | orchestrator | Saturday 10 January 2026 14:43:12 +0000 (0:00:04.981) 0:00:33.630 ****** 2026-01-10 14:44:02.304101 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:02.304107 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:02.304113 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:02.304119 | orchestrator | 2026-01-10 14:44:02.304125 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-10 14:44:02.304131 | orchestrator | Saturday 10 January 2026 14:43:13 +0000 (0:00:00.791) 0:00:34.421 ****** 2026-01-10 14:44:02.304196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304235 | orchestrator | 2026-01-10 14:44:02.304242 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-10 14:44:02.304248 | orchestrator | Saturday 10 January 2026 14:43:14 +0000 (0:00:01.522) 0:00:35.944 ****** 2026-01-10 14:44:02.304254 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:02.304261 | orchestrator | 2026-01-10 14:44:02.304265 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-10 14:44:02.304269 | orchestrator | Saturday 10 January 2026 14:43:14 +0000 (0:00:00.221) 0:00:36.166 ****** 2026-01-10 14:44:02.304273 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:02.304278 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:02.304284 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:02.304290 | orchestrator | 2026-01-10 14:44:02.304295 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-10 14:44:02.304302 | orchestrator | Saturday 10 January 2026 14:43:15 +0000 (0:00:00.882) 0:00:37.048 ****** 2026-01-10 14:44:02.304308 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:44:02.304314 | orchestrator | 2026-01-10 14:44:02.304320 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-10 14:44:02.304332 | orchestrator | Saturday 10 January 2026 14:43:16 +0000 (0:00:01.248) 0:00:38.296 ****** 2026-01-10 14:44:02.304344 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304387 | orchestrator | 2026-01-10 14:44:02.304393 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-10 14:44:02.304400 | orchestrator | Saturday 10 January 2026 14:43:18 +0000 (0:00:02.030) 0:00:40.326 ****** 2026-01-10 14:44:02.304407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 
14:44:02.304418 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:02.304423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:44:02.304427 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:02.304440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2026-01-10 14:44:02.304445 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:02.304449 | orchestrator | 2026-01-10 14:44:02.304454 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-10 14:44:02.304458 | orchestrator | Saturday 10 January 2026 14:43:21 +0000 (0:00:02.054) 0:00:42.381 ****** 2026-01-10 14:44:02.304463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:44:02.304467 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:02.304472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:44:02.304483 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:02.304488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:44:02.304492 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:02.304497 | orchestrator | 2026-01-10 14:44:02.304501 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-10 14:44:02.304505 | orchestrator | Saturday 10 January 2026 14:43:21 +0000 (0:00:00.790) 0:00:43.171 ****** 2026-01-10 14:44:02.304516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304534 | orchestrator | 2026-01-10 14:44:02.304539 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-10 14:44:02.304543 | orchestrator | Saturday 10 January 2026 14:43:23 +0000 (0:00:01.588) 0:00:44.760 ****** 2026-01-10 14:44:02.304548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304569 | orchestrator | 2026-01-10 14:44:02.304574 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-01-10 14:44:02.304578 | orchestrator | Saturday 10 January 2026 14:43:26 +0000 (0:00:03.140) 0:00:47.901 ****** 2026-01-10 14:44:02.304583 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-10 14:44:02.304587 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-10 14:44:02.304592 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-10 14:44:02.304596 | orchestrator | 2026-01-10 14:44:02.304601 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-10 14:44:02.304605 | orchestrator | Saturday 10 January 2026 14:43:28 +0000 (0:00:01.979) 0:00:49.880 ****** 2026-01-10 14:44:02.304614 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:44:02.304619 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:44:02.304624 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:44:02.304628 | orchestrator | 2026-01-10 14:44:02.304633 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-10 14:44:02.304637 | orchestrator | Saturday 10 January 2026 14:43:30 +0000 (0:00:01.617) 0:00:51.498 ****** 2026-01-10 14:44:02.304642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:44:02.304646 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:02.304654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:44:02.304658 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:02.304668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-10 14:44:02.304675 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:02.304681 | orchestrator | 2026-01-10 14:44:02.304687 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-01-10 14:44:02.304693 | orchestrator | Saturday 10 January 2026 14:43:30 +0000 (0:00:00.619) 0:00:52.117 ****** 2026-01-10 14:44:02.304698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:02.304723 | orchestrator | 2026-01-10 14:44:02.304729 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-01-10 14:44:02.304735 | orchestrator | Saturday 10 January 2026 14:43:31 +0000 (0:00:01.227) 0:00:53.345 ****** 2026-01-10 14:44:02.304741 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:44:02.304751 | orchestrator | 2026-01-10 14:44:02.304755 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-01-10 14:44:02.304759 | orchestrator | Saturday 10 January 2026 14:43:35 +0000 (0:00:03.799) 0:00:57.145 ****** 2026-01-10 14:44:02.304762 | orchestrator | changed: [testbed-node-0] 2026-01-10 
14:44:02.304766 | orchestrator | 2026-01-10 14:44:02.304770 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-01-10 14:44:02.304774 | orchestrator | Saturday 10 January 2026 14:43:38 +0000 (0:00:03.144) 0:01:00.289 ****** 2026-01-10 14:44:02.304777 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:44:02.304781 | orchestrator | 2026-01-10 14:44:02.304785 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-10 14:44:02.304789 | orchestrator | Saturday 10 January 2026 14:43:54 +0000 (0:00:15.557) 0:01:15.846 ****** 2026-01-10 14:44:02.304793 | orchestrator | 2026-01-10 14:44:02.304796 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-10 14:44:02.304800 | orchestrator | Saturday 10 January 2026 14:43:54 +0000 (0:00:00.155) 0:01:16.001 ****** 2026-01-10 14:44:02.304804 | orchestrator | 2026-01-10 14:44:02.304810 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-10 14:44:02.304815 | orchestrator | Saturday 10 January 2026 14:43:54 +0000 (0:00:00.163) 0:01:16.165 ****** 2026-01-10 14:44:02.304822 | orchestrator | 2026-01-10 14:44:02.304826 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-01-10 14:44:02.304830 | orchestrator | Saturday 10 January 2026 14:43:54 +0000 (0:00:00.090) 0:01:16.256 ****** 2026-01-10 14:44:02.304833 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:44:02.304837 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:44:02.304841 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:44:02.304845 | orchestrator | 2026-01-10 14:44:02.304848 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:44:02.304853 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 
ignored=0 2026-01-10 14:44:02.304859 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:44:02.304863 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:44:02.304866 | orchestrator | 2026-01-10 14:44:02.304870 | orchestrator | 2026-01-10 14:44:02.304874 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:44:02.304878 | orchestrator | Saturday 10 January 2026 14:44:00 +0000 (0:00:05.864) 0:01:22.121 ****** 2026-01-10 14:44:02.304881 | orchestrator | =============================================================================== 2026-01-10 14:44:02.304885 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.56s 2026-01-10 14:44:02.304889 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.92s 2026-01-10 14:44:02.304893 | orchestrator | placement : Restart placement-api container ----------------------------- 5.86s 2026-01-10 14:44:02.304897 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.98s 2026-01-10 14:44:02.304900 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.79s 2026-01-10 14:44:02.304904 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.78s 2026-01-10 14:44:02.304908 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.70s 2026-01-10 14:44:02.304912 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.19s 2026-01-10 14:44:02.304915 | orchestrator | placement : Creating placement databases -------------------------------- 3.80s 2026-01-10 14:44:02.304919 | orchestrator | placement : Creating placement databases user and setting permissions --- 3.14s 2026-01-10 14:44:02.304923 | 
orchestrator | placement : Copying over placement.conf --------------------------------- 3.14s 2026-01-10 14:44:02.304926 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 2.05s 2026-01-10 14:44:02.304930 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.03s 2026-01-10 14:44:02.304934 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.98s 2026-01-10 14:44:02.304938 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.62s 2026-01-10 14:44:02.304941 | orchestrator | placement : Copying over config.json files for services ----------------- 1.59s 2026-01-10 14:44:02.304945 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.52s 2026-01-10 14:44:02.304949 | orchestrator | placement : include_tasks ----------------------------------------------- 1.25s 2026-01-10 14:44:02.304953 | orchestrator | placement : Check placement containers ---------------------------------- 1.23s 2026-01-10 14:44:02.304956 | orchestrator | placement : include_tasks ----------------------------------------------- 1.17s 2026-01-10 14:44:02.304960 | orchestrator | 2026-01-10 14:44:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:05.332423 | orchestrator | 2026-01-10 14:44:05 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:44:05.332940 | orchestrator | 2026-01-10 14:44:05 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:44:05.334154 | orchestrator | 2026-01-10 14:44:05 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:44:05.336001 | orchestrator | 2026-01-10 14:44:05 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:44:05.339160 | orchestrator | 2026-01-10 14:44:05 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in 
state STARTED 2026-01-10 14:44:05.339220 | orchestrator | 2026-01-10 14:44:05 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:08.373163 | orchestrator | 2026-01-10 14:44:08 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:44:08.375595 | orchestrator | 2026-01-10 14:44:08 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:44:08.376787 | orchestrator | 2026-01-10 14:44:08 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state STARTED 2026-01-10 14:44:08.379005 | orchestrator | 2026-01-10 14:44:08 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:44:08.379038 | orchestrator | 2026-01-10 14:44:08 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:44:08.379046 | orchestrator | 2026-01-10 14:44:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:11.413055 | orchestrator | 2026-01-10 14:44:11 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:44:11.415156 | orchestrator | 2026-01-10 14:44:11 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:44:11.416129 | orchestrator | 2026-01-10 14:44:11 | INFO  | Task 8ae33bb5-2867-49c3-bf3e-59bee146d63e is in state SUCCESS 2026-01-10 14:44:11.417592 | orchestrator | 2026-01-10 14:44:11.417625 | orchestrator | 2026-01-10 14:44:11.417632 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:44:11.417640 | orchestrator | 2026-01-10 14:44:11.417647 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:44:11.417654 | orchestrator | Saturday 10 January 2026 14:41:54 +0000 (0:00:00.241) 0:00:00.241 ****** 2026-01-10 14:44:11.417661 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:44:11.417668 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:44:11.417675 | orchestrator | 
ok: [testbed-node-2] 2026-01-10 14:44:11.417682 | orchestrator | 2026-01-10 14:44:11.417688 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:44:11.417695 | orchestrator | Saturday 10 January 2026 14:41:54 +0000 (0:00:00.249) 0:00:00.491 ****** 2026-01-10 14:44:11.417702 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-10 14:44:11.417709 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-10 14:44:11.417715 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-10 14:44:11.417722 | orchestrator | 2026-01-10 14:44:11.417729 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-10 14:44:11.417736 | orchestrator | 2026-01-10 14:44:11.417742 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-10 14:44:11.417749 | orchestrator | Saturday 10 January 2026 14:41:55 +0000 (0:00:00.445) 0:00:00.937 ****** 2026-01-10 14:44:11.417756 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:44:11.417763 | orchestrator | 2026-01-10 14:44:11.417770 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-01-10 14:44:11.417776 | orchestrator | Saturday 10 January 2026 14:41:55 +0000 (0:00:00.498) 0:00:01.435 ****** 2026-01-10 14:44:11.417783 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-01-10 14:44:11.417790 | orchestrator | 2026-01-10 14:44:11.417797 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-01-10 14:44:11.417849 | orchestrator | Saturday 10 January 2026 14:41:59 +0000 (0:00:03.941) 0:00:05.377 ****** 2026-01-10 14:44:11.417861 | orchestrator | changed: [testbed-node-0] => (item=barbican -> 
https://api-int.testbed.osism.xyz:9311 -> internal) 2026-01-10 14:44:11.417871 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-01-10 14:44:11.417881 | orchestrator | 2026-01-10 14:44:11.417927 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-01-10 14:44:11.417941 | orchestrator | Saturday 10 January 2026 14:42:06 +0000 (0:00:06.736) 0:00:12.113 ****** 2026-01-10 14:44:11.417952 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:44:11.417963 | orchestrator | 2026-01-10 14:44:11.417973 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-01-10 14:44:11.417984 | orchestrator | Saturday 10 January 2026 14:42:10 +0000 (0:00:04.308) 0:00:16.422 ****** 2026-01-10 14:44:11.417995 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:44:11.418005 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-01-10 14:44:11.418011 | orchestrator | 2026-01-10 14:44:11.418087 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-01-10 14:44:11.418098 | orchestrator | Saturday 10 January 2026 14:42:14 +0000 (0:00:04.405) 0:00:20.827 ****** 2026-01-10 14:44:11.418108 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:44:11.418118 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-01-10 14:44:11.418130 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-01-10 14:44:11.418140 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-01-10 14:44:11.418151 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-01-10 14:44:11.418160 | orchestrator | 2026-01-10 14:44:11.418167 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-01-10 14:44:11.418186 | orchestrator | 
Saturday 10 January 2026 14:42:36 +0000 (0:00:21.358) 0:00:42.187 ****** 2026-01-10 14:44:11.418202 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-01-10 14:44:11.418216 | orchestrator | 2026-01-10 14:44:11.418225 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-01-10 14:44:11.418235 | orchestrator | Saturday 10 January 2026 14:42:42 +0000 (0:00:05.787) 0:00:47.974 ****** 2026-01-10 14:44:11.418248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.418276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.418298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.418309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418452 | orchestrator | 2026-01-10 14:44:11.418460 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-01-10 14:44:11.418468 | orchestrator | Saturday 10 January 2026 14:42:44 +0000 (0:00:02.543) 0:00:50.518 ****** 2026-01-10 14:44:11.418475 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-01-10 14:44:11.418482 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-01-10 14:44:11.418489 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-01-10 14:44:11.418496 | orchestrator | 2026-01-10 14:44:11.418503 | orchestrator | TASK [barbican : Check if policies shall be 
overwritten] *********************** 2026-01-10 14:44:11.418510 | orchestrator | Saturday 10 January 2026 14:42:47 +0000 (0:00:02.561) 0:00:53.080 ****** 2026-01-10 14:44:11.418517 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:11.418524 | orchestrator | 2026-01-10 14:44:11.418531 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-01-10 14:44:11.418538 | orchestrator | Saturday 10 January 2026 14:42:47 +0000 (0:00:00.252) 0:00:53.332 ****** 2026-01-10 14:44:11.418545 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:11.418552 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:11.418560 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:11.418567 | orchestrator | 2026-01-10 14:44:11.418573 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-10 14:44:11.418579 | orchestrator | Saturday 10 January 2026 14:42:48 +0000 (0:00:00.697) 0:00:54.030 ****** 2026-01-10 14:44:11.418586 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:44:11.418592 | orchestrator | 2026-01-10 14:44:11.418598 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-01-10 14:44:11.418604 | orchestrator | Saturday 10 January 2026 14:42:48 +0000 (0:00:00.880) 0:00:54.910 ****** 2026-01-10 14:44:11.418615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.418627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.418638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.418645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.418699 | orchestrator | 2026-01-10 14:44:11.418706 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-01-10 14:44:11.418713 | orchestrator | Saturday 10 January 2026 14:42:53 +0000 (0:00:04.062) 0:00:58.973 ****** 2026-01-10 14:44:11.418720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:44:11.418726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418744 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:11.418760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:44:11.418767 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418781 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:11.418796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:44:11.418813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418830 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:11.418837 | orchestrator | 2026-01-10 14:44:11.418844 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-01-10 14:44:11.418850 | orchestrator | Saturday 10 January 2026 14:42:53 +0000 (0:00:00.798) 0:00:59.771 ****** 2026-01-10 14:44:11.418862 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:44:11.418869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:44:11.418898 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:11.418913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418936 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418947 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:11.418965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:44:11.418976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.418998 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:11.419009 | orchestrator | 2026-01-10 14:44:11.419020 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-01-10 14:44:11.419031 | orchestrator | Saturday 10 January 2026 14:42:55 +0000 (0:00:01.693) 0:01:01.465 ****** 2026-01-10 14:44:11.419047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.419313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.419328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419335 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.419342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419408 | orchestrator | 2026-01-10 14:44:11.419414 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-01-10 14:44:11.419421 | orchestrator | Saturday 10 January 2026 14:43:00 +0000 (0:00:05.336) 0:01:06.801 ****** 2026-01-10 14:44:11.419431 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:44:11.419441 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:44:11.419451 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:44:11.419461 | orchestrator | 2026-01-10 14:44:11.419471 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-01-10 14:44:11.419483 | orchestrator | Saturday 10 January 2026 14:43:03 +0000 (0:00:02.496) 0:01:09.297 ****** 2026-01-10 14:44:11.419494 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:44:11.419504 | orchestrator | 2026-01-10 14:44:11.419514 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-01-10 14:44:11.419525 | orchestrator | Saturday 10 January 2026 14:43:04 +0000 (0:00:00.953) 0:01:10.251 ****** 2026-01-10 14:44:11.419532 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:11.419538 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:11.419544 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:11.419550 | orchestrator | 2026-01-10 14:44:11.419556 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-01-10 14:44:11.419562 | orchestrator | Saturday 10 January 2026 14:43:04 +0000 (0:00:00.411) 0:01:10.663 ****** 2026-01-10 14:44:11.419569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.419584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.419596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.419603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}}) 2026-01-10 14:44:11.419642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419648 | orchestrator | 2026-01-10 14:44:11.419654 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-01-10 14:44:11.419661 | orchestrator | Saturday 10 January 2026 14:43:15 +0000 (0:00:10.475) 0:01:21.138 ****** 2026-01-10 14:44:11.419671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:44:11.419678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.419688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.419694 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:11.419703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:44:11.419710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-10 14:44:11.419720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.419727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.419736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.419743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:44:11.419749 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:11.419756 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:11.419762 | orchestrator | 2026-01-10 
14:44:11.419768 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-01-10 14:44:11.419774 | orchestrator | Saturday 10 January 2026 14:43:16 +0000 (0:00:01.193) 0:01:22.331 ****** 2026-01-10 14:44:11.419784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.419794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.419801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-10 14:44:11.419811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:44:11.419863 | orchestrator | 2026-01-10 14:44:11.419869 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-10 14:44:11.419875 | orchestrator | Saturday 10 January 2026 14:43:20 +0000 (0:00:04.385) 0:01:26.716 ****** 2026-01-10 14:44:11.419881 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:11.419888 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:11.419894 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:11.419900 | orchestrator | 2026-01-10 14:44:11.419906 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-10 14:44:11.419912 | orchestrator | Saturday 10 January 2026 14:43:21 +0000 (0:00:00.585) 0:01:27.302 ****** 2026-01-10 14:44:11.419918 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:44:11.419924 | orchestrator | 2026-01-10 14:44:11.419931 | orchestrator | TASK [barbican : 
Creating barbican database user and setting permissions] ******
2026-01-10 14:44:11.419938 | orchestrator | Saturday 10 January 2026 14:43:24 +0000 (0:00:02.692) 0:01:29.994 ******
2026-01-10 14:44:11.419945 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:44:11.419952 | orchestrator |
2026-01-10 14:44:11.419958 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-01-10 14:44:11.419965 | orchestrator | Saturday 10 January 2026 14:43:27 +0000 (0:00:02.996) 0:01:32.995 ******
2026-01-10 14:44:11.419972 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:44:11.419979 | orchestrator |
2026-01-10 14:44:11.419986 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-01-10 14:44:11.419993 | orchestrator | Saturday 10 January 2026 14:43:40 +0000 (0:00:13.661) 0:01:46.656 ******
2026-01-10 14:44:11.420000 | orchestrator |
2026-01-10 14:44:11.420007 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-01-10 14:44:11.420015 | orchestrator | Saturday 10 January 2026 14:43:40 +0000 (0:00:00.063) 0:01:46.719 ******
2026-01-10 14:44:11.420022 | orchestrator |
2026-01-10 14:44:11.420029 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-01-10 14:44:11.420036 | orchestrator | Saturday 10 January 2026 14:43:40 +0000 (0:00:00.067) 0:01:46.787 ******
2026-01-10 14:44:11.420043 | orchestrator |
2026-01-10 14:44:11.420050 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-01-10 14:44:11.420057 | orchestrator | Saturday 10 January 2026 14:43:40 +0000 (0:00:00.071) 0:01:46.858 ******
2026-01-10 14:44:11.420064 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:44:11.420071 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:44:11.420078 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:44:11.420085 | orchestrator |
2026-01-10 14:44:11.420092 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-01-10 14:44:11.420101 | orchestrator | Saturday 10 January 2026 14:43:50 +0000 (0:00:09.575) 0:01:56.433 ******
2026-01-10 14:44:11.420109 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:44:11.420116 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:44:11.420123 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:44:11.420130 | orchestrator |
2026-01-10 14:44:11.420136 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-01-10 14:44:11.420144 | orchestrator | Saturday 10 January 2026 14:43:59 +0000 (0:00:08.825) 0:02:05.259 ******
2026-01-10 14:44:11.420151 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:44:11.420157 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:44:11.420165 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:44:11.420172 | orchestrator |
2026-01-10 14:44:11.420179 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:44:11.420187 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:44:11.420198 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:44:11.420204 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:44:11.420210 | orchestrator |
2026-01-10 14:44:11.420217 | orchestrator |
2026-01-10 14:44:11.420223 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:44:11.420229 | orchestrator | Saturday 10 January 2026 14:44:09 +0000 (0:00:10.318) 0:02:15.578 ******
2026-01-10 14:44:11.420235 | orchestrator | ===============================================================================
2026-01-10 14:44:11.420241 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 21.36s
2026-01-10 14:44:11.420251 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.66s
2026-01-10 14:44:11.420257 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.48s
2026-01-10 14:44:11.420263 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.32s
2026-01-10 14:44:11.420269 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.58s
2026-01-10 14:44:11.420275 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.83s
2026-01-10 14:44:11.420281 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.74s
2026-01-10 14:44:11.420287 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.79s
2026-01-10 14:44:11.420294 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.34s
2026-01-10 14:44:11.420300 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.41s
2026-01-10 14:44:11.420306 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.39s
2026-01-10 14:44:11.420312 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 4.31s
2026-01-10 14:44:11.420318 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.06s
2026-01-10 14:44:11.420325 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.94s
2026-01-10 14:44:11.420331 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 3.00s
2026-01-10 14:44:11.420337 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.69s
2026-01-10 14:44:11.420343
| orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.56s 2026-01-10 14:44:11.420365 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.54s 2026-01-10 14:44:11.420371 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.50s 2026-01-10 14:44:11.420378 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.69s 2026-01-10 14:44:11.420384 | orchestrator | 2026-01-10 14:44:11 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:44:11.420390 | orchestrator | 2026-01-10 14:44:11 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:44:11.420397 | orchestrator | 2026-01-10 14:44:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:14.450824 | orchestrator | 2026-01-10 14:44:14 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:44:14.451052 | orchestrator | 2026-01-10 14:44:14 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:44:14.451554 | orchestrator | 2026-01-10 14:44:14 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:44:14.452150 | orchestrator | 2026-01-10 14:44:14 | INFO  | Task 7d1953dd-0c2f-4f68-8549-7120e3880c46 is in state STARTED 2026-01-10 14:44:14.452595 | orchestrator | 2026-01-10 14:44:14 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:44:14.452703 | orchestrator | 2026-01-10 14:44:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:17.484703 | orchestrator | 2026-01-10 14:44:17 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:44:17.484868 | orchestrator | 2026-01-10 14:44:17 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:44:17.485644 | orchestrator | 2026-01-10 14:44:17 | INFO  | Task 
7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:44:17.486370 | orchestrator | 2026-01-10 14:44:17 | INFO  | Task 7d1953dd-0c2f-4f68-8549-7120e3880c46 is in state STARTED 2026-01-10 14:44:17.487084 | orchestrator | 2026-01-10 14:44:17 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:44:17.487126 | orchestrator | 2026-01-10 14:44:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:20.518297 | orchestrator | 2026-01-10 14:44:20 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:44:20.518843 | orchestrator | 2026-01-10 14:44:20 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:44:20.519717 | orchestrator | 2026-01-10 14:44:20 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:44:20.520725 | orchestrator | 2026-01-10 14:44:20 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:44:20.521449 | orchestrator | 2026-01-10 14:44:20 | INFO  | Task 7d1953dd-0c2f-4f68-8549-7120e3880c46 is in state SUCCESS 2026-01-10 14:44:20.522293 | orchestrator | 2026-01-10 14:44:20 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:44:20.522405 | orchestrator | 2026-01-10 14:44:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:23.564714 | orchestrator | 2026-01-10 14:44:23 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:44:23.565142 | orchestrator | 2026-01-10 14:44:23 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:44:23.565995 | orchestrator | 2026-01-10 14:44:23 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:44:23.567016 | orchestrator | 2026-01-10 14:44:23 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:44:23.568507 | orchestrator | 2026-01-10 14:44:23 | INFO  | Task 
7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:44:23.568545 | orchestrator | 2026-01-10 14:44:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:26.620733 | orchestrator | 2026-01-10 14:44:26 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:44:26.622226 | orchestrator | 2026-01-10 14:44:26 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:44:26.623459 | orchestrator | 2026-01-10 14:44:26 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:44:26.624681 | orchestrator | 2026-01-10 14:44:26 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:44:26.625860 | orchestrator | 2026-01-10 14:44:26 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state STARTED 2026-01-10 14:44:26.625972 | orchestrator | 2026-01-10 14:44:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:29.671657 | orchestrator | 2026-01-10 14:44:29 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:44:29.671716 | orchestrator | 2026-01-10 14:44:29 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED 2026-01-10 14:44:29.672559 | orchestrator | 2026-01-10 14:44:29 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:44:29.673603 | orchestrator | 2026-01-10 14:44:29 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:44:29.674419 | orchestrator | 2026-01-10 14:44:29 | INFO  | Task 7c0358e6-0ef5-4e80-839b-bee284e13b7d is in state SUCCESS 2026-01-10 14:44:29.674758 | orchestrator | 2026-01-10 14:44:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:32.719870 | orchestrator | 2026-01-10 14:44:32 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:44:32.720691 | orchestrator | 2026-01-10 14:44:32 | INFO  | Task 
bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:44:32.722544 | orchestrator | 2026-01-10 14:44:32 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:44:32.723507 | orchestrator | 2026-01-10 14:44:32 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:44:32.723762 | orchestrator | 2026-01-10 14:44:32 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:44:35.797574 | orchestrator | 2026-01-10 14:44:35 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:44:35.798373 | orchestrator | 2026-01-10 14:44:35 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:44:35.799121 | orchestrator | 2026-01-10 14:44:35 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:44:35.800116 | orchestrator | 2026-01-10 14:44:35 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:44:35.801600 | orchestrator | 2026-01-10 14:44:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:44:38.855392 | orchestrator | 2026-01-10 14:44:38 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:44:38.856655 | orchestrator | 2026-01-10 14:44:38 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:44:38.858050 | orchestrator | 2026-01-10 14:44:38 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:44:38.860246 | orchestrator | 2026-01-10 14:44:38 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:44:38.860292 | orchestrator | 2026-01-10 14:44:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:44:41.899274 | orchestrator | 2026-01-10 14:44:41 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:44:41.901172 | orchestrator | 2026-01-10 14:44:41 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:44:41.903467 | orchestrator | 2026-01-10 14:44:41 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:44:41.905475 | orchestrator | 2026-01-10 14:44:41 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:44:41.905538 | orchestrator | 2026-01-10 14:44:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:44:44.936847 | orchestrator | 2026-01-10 14:44:44 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:44:44.937495 | orchestrator | 2026-01-10 14:44:44 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:44:44.941059 | orchestrator | 2026-01-10 14:44:44 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:44:44.941097 | orchestrator | 2026-01-10 14:44:44 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:44:44.941218 | orchestrator | 2026-01-10 14:44:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:44:47.975048 | orchestrator | 2026-01-10 14:44:47 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:44:47.975662 | orchestrator | 2026-01-10 14:44:47 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:44:47.976096 | orchestrator | 2026-01-10 14:44:47 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:44:47.976801 | orchestrator | 2026-01-10 14:44:47 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:44:47.976824 | orchestrator | 2026-01-10 14:44:47 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:44:51.024413 | orchestrator | 2026-01-10 14:44:51 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:44:51.025285 | orchestrator | 2026-01-10 14:44:51 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:44:51.026057 | orchestrator | 2026-01-10 14:44:51 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:44:51.026942 | orchestrator | 2026-01-10 14:44:51 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:44:51.026972 | orchestrator | 2026-01-10 14:44:51 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:44:54.146720 | orchestrator | 2026-01-10 14:44:54 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:44:54.146769 | orchestrator | 2026-01-10 14:44:54 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:44:54.146775 | orchestrator | 2026-01-10 14:44:54 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:44:54.146779 | orchestrator | 2026-01-10 14:44:54 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:44:54.146783 | orchestrator | 2026-01-10 14:44:54 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:44:57.121721 | orchestrator | 2026-01-10 14:44:57 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:44:57.121792 | orchestrator | 2026-01-10 14:44:57 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:44:57.121799 | orchestrator | 2026-01-10 14:44:57 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:44:57.122348 | orchestrator | 2026-01-10 14:44:57 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:44:57.122390 | orchestrator | 2026-01-10 14:44:57 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:00.143825 | orchestrator | 2026-01-10 14:45:00 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:45:00.145009 | orchestrator | 2026-01-10 14:45:00 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:45:00.146777 | orchestrator | 2026-01-10 14:45:00 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:45:00.147435 | orchestrator | 2026-01-10 14:45:00 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:45:00.147681 | orchestrator | 2026-01-10 14:45:00 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:03.188870 | orchestrator | 2026-01-10 14:45:03 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:45:03.189549 | orchestrator | 2026-01-10 14:45:03 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:45:03.189811 | orchestrator | 2026-01-10 14:45:03 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:45:03.190535 | orchestrator | 2026-01-10 14:45:03 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:45:03.190564 | orchestrator | 2026-01-10 14:45:03 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:06.229613 | orchestrator | 2026-01-10 14:45:06 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:45:06.229926 | orchestrator | 2026-01-10 14:45:06 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:45:06.230636 | orchestrator | 2026-01-10 14:45:06 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:45:06.231441 | orchestrator | 2026-01-10 14:45:06 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:45:06.231472 | orchestrator | 2026-01-10 14:45:06 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:09.295630 | orchestrator | 2026-01-10 14:45:09 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:45:09.297524 | orchestrator | 2026-01-10 14:45:09 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:45:09.299084 | orchestrator | 2026-01-10 14:45:09 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:45:09.300804 | orchestrator | 2026-01-10 14:45:09 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:45:09.300844 | orchestrator | 2026-01-10 14:45:09 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:12.351951 | orchestrator | 2026-01-10 14:45:12 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:45:12.352899 | orchestrator | 2026-01-10 14:45:12 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state STARTED
2026-01-10 14:45:12.352924 | orchestrator | 2026-01-10 14:45:12 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED
2026-01-10 14:45:12.352930 | orchestrator | 2026-01-10 14:45:12 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED
2026-01-10 14:45:12.352936 | orchestrator | 2026-01-10 14:45:12 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:15.391520 | orchestrator | 2026-01-10 14:45:15 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED
2026-01-10 14:45:15.393782 | orchestrator | 2026-01-10 14:45:15 | INFO  | Task bf4652d2-b509-4feb-b3bd-4577397873a8 is in state SUCCESS
2026-01-10 14:45:15.396097 | orchestrator |
2026-01-10 14:45:15.396175 | orchestrator |
2026-01-10 14:45:15.396187 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:45:15.396195 | orchestrator |
2026-01-10 14:45:15.396202 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:45:15.396211 | orchestrator | Saturday 10 January 2026 14:44:15 +0000 (0:00:00.146) 0:00:00.146 ******
2026-01-10 14:45:15.396218 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:45:15.396227 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:45:15.396234 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:45:15.396241 | orchestrator |
2026-01-10 14:45:15.396248 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:45:15.396255 | orchestrator | Saturday 10 January 2026 14:44:15 +0000 (0:00:00.289) 0:00:00.435 ******
2026-01-10 14:45:15.396262 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-01-10 14:45:15.396269 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-01-10 14:45:15.396330 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-01-10 14:45:15.396338 | orchestrator |
2026-01-10 14:45:15.396344 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-01-10 14:45:15.396351 | orchestrator |
2026-01-10 14:45:15.396358 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-01-10 14:45:15.396386 | orchestrator | Saturday 10 January 2026 14:44:17 +0000 (0:00:01.407) 0:00:01.843 ******
2026-01-10 14:45:15.396394 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:45:15.396400 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:45:15.396407 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:45:15.396413 | orchestrator |
2026-01-10 14:45:15.396420 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:45:15.396428 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:45:15.396436 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:45:15.396443 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:45:15.396449 | orchestrator |
2026-01-10 14:45:15.396456 | orchestrator |
2026-01-10 14:45:15.396463 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:45:15.396469 | orchestrator | Saturday 10 January 2026 14:44:18 +0000 (0:00:00.910) 0:00:02.754 ******
2026-01-10 14:45:15.396476 | orchestrator | ===============================================================================
2026-01-10 14:45:15.396482 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.41s
2026-01-10 14:45:15.396489 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.91s
2026-01-10 14:45:15.396496 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-01-10 14:45:15.396503 | orchestrator |
2026-01-10 14:45:15.396509 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-10 14:45:15.396517 | orchestrator | 2.16.14
2026-01-10 14:45:15.396524 | orchestrator |
2026-01-10 14:45:15.396530 | orchestrator | PLAY [Bootstrap ceph dashboard] ***********************************************
2026-01-10 14:45:15.396536 | orchestrator |
2026-01-10 14:45:15.396542 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-01-10 14:45:15.396548 | orchestrator | Saturday 10 January 2026 14:42:50 +0000 (0:00:00.253) 0:00:00.253 ******
2026-01-10 14:45:15.396555 | orchestrator | changed: [testbed-manager]
2026-01-10 14:45:15.396561 | orchestrator |
2026-01-10 14:45:15.396568 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-01-10 14:45:15.396574 | orchestrator | Saturday 10 January 2026 14:42:52 +0000 (0:00:02.453) 0:00:02.706 ******
2026-01-10 14:45:15.396580 | orchestrator | changed: [testbed-manager]
2026-01-10 14:45:15.396587 | orchestrator |
2026-01-10 14:45:15.396594 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-01-10 14:45:15.396601 | orchestrator | Saturday 10 January 2026 14:42:53 +0000 (0:00:00.961) 0:00:03.668 ******
2026-01-10 14:45:15.396607 | orchestrator | changed: [testbed-manager]
2026-01-10 14:45:15.396614 | orchestrator |
2026-01-10 14:45:15.396621 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-01-10 14:45:15.396627 | orchestrator | Saturday 10 January 2026 14:42:54 +0000 (0:00:00.983) 0:00:04.652 ******
2026-01-10 14:45:15.396634 | orchestrator | changed: [testbed-manager]
2026-01-10 14:45:15.396641 | orchestrator |
2026-01-10 14:45:15.396648 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-01-10 14:45:15.396655 | orchestrator | Saturday 10 January 2026 14:42:55 +0000 (0:00:01.385) 0:00:06.037 ******
2026-01-10 14:45:15.396662 | orchestrator | changed: [testbed-manager]
2026-01-10 14:45:15.396669 | orchestrator |
2026-01-10 14:45:15.396675 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-01-10 14:45:15.396682 | orchestrator | Saturday 10 January 2026 14:42:56 +0000 (0:00:00.968) 0:00:07.005 ******
2026-01-10 14:45:15.396688 | orchestrator | changed: [testbed-manager]
2026-01-10 14:45:15.396695 | orchestrator |
2026-01-10 14:45:15.396702 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-01-10 14:45:15.396715 | orchestrator | Saturday 10 January 2026 14:42:57 +0000 (0:00:00.865) 0:00:07.870 ******
2026-01-10 14:45:15.396720 | orchestrator | changed: [testbed-manager]
2026-01-10 14:45:15.396726 | orchestrator |
2026-01-10 14:45:15.396732 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-01-10 14:45:15.396738 | orchestrator | Saturday 10 January 2026 14:42:59 +0000 (0:00:01.347) 0:00:09.217 ******
2026-01-10 14:45:15.396743 | orchestrator | changed: [testbed-manager]
2026-01-10 14:45:15.396749 | orchestrator |
2026-01-10 14:45:15.396755 | orchestrator | TASK [Create admin user] *******************************************************
2026-01-10 14:45:15.396760 | orchestrator | Saturday 10 January 2026 14:43:00 +0000 (0:00:01.086) 0:00:10.304 ******
2026-01-10 14:45:15.396767 | orchestrator | changed: [testbed-manager]
2026-01-10 14:45:15.396774 | orchestrator |
2026-01-10 14:45:15.396793 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-01-10 14:45:15.396801 | orchestrator | Saturday 10 January 2026 14:44:02 +0000 (0:01:02.626) 0:01:12.930 ******
2026-01-10 14:45:15.396808 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:45:15.396815 | orchestrator |
2026-01-10 14:45:15.396823 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-10 14:45:15.396830 | orchestrator |
2026-01-10 14:45:15.396837 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-10 14:45:15.396844 | orchestrator | Saturday 10 January 2026 14:44:02 +0000 (0:00:00.146) 0:01:13.076 ******
2026-01-10 14:45:15.396851 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:15.396857 | orchestrator |
2026-01-10 14:45:15.396864 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-10 14:45:15.396871 | orchestrator |
2026-01-10 14:45:15.396878 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-10 14:45:15.396885 | orchestrator | Saturday 10 January 2026 14:44:15 +0000 (0:00:12.188) 0:01:25.265 ******
2026-01-10 14:45:15.396892 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:45:15.396899 | orchestrator |
2026-01-10 14:45:15.396907 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-10 14:45:15.396914 | orchestrator |
2026-01-10 14:45:15.396921 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-10 14:45:15.396928 | orchestrator | Saturday 10 January 2026 14:44:26 +0000 (0:00:11.377) 0:01:36.642 ******
2026-01-10 14:45:15.396935 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:45:15.396942 | orchestrator |
2026-01-10 14:45:15.396949 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:45:15.396957 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:45:15.396966 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:45:15.396973 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:45:15.396980 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:45:15.396987 | orchestrator |
2026-01-10 14:45:15.396994 | orchestrator |
2026-01-10 14:45:15.397001 | orchestrator |
2026-01-10 14:45:15.397008 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:45:15.397015 | orchestrator | Saturday 10 January 2026 14:44:27 +0000 (0:00:01.028) 0:01:37.670 ******
2026-01-10 14:45:15.397022 | orchestrator | ===============================================================================
2026-01-10 14:45:15.397030 | orchestrator | Create admin user ------------------------------------------------------ 62.63s
2026-01-10 14:45:15.397036 | orchestrator | Restart ceph manager service ------------------------------------------- 24.59s
2026-01-10 14:45:15.397044 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.45s
2026-01-10 14:45:15.397057 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.39s
2026-01-10 14:45:15.397064 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.35s
2026-01-10 14:45:15.397071 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.09s
2026-01-10 14:45:15.397077 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.98s
2026-01-10 14:45:15.397085 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.97s
2026-01-10 14:45:15.397092 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.96s
2026-01-10 14:45:15.397099 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.87s
2026-01-10 14:45:15.397106 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s
2026-01-10 14:45:15.397112 | orchestrator |
2026-01-10 14:45:15.397120 | orchestrator |
2026-01-10 14:45:15.397126 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:45:15.397133 | orchestrator |
2026-01-10 14:45:15.397141 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:45:15.397148 | orchestrator | Saturday 10 January 2026 14:41:53 +0000 (0:00:00.248) 0:00:00.248 ******
2026-01-10 14:45:15.397155 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:45:15.397162 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:45:15.397168 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:45:15.397175 | orchestrator |
2026-01-10 14:45:15.397182 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:45:15.397189 | orchestrator | Saturday 10 January 2026 14:41:53 +0000 (0:00:00.304) 0:00:00.552 ******
2026-01-10 14:45:15.397196 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-01-10 14:45:15.397203 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-01-10 14:45:15.397210 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-01-10 14:45:15.397217 | orchestrator |
2026-01-10 14:45:15.397224 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-01-10 14:45:15.397231 | orchestrator |
2026-01-10 14:45:15.397238 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-10 14:45:15.397245 | orchestrator | Saturday 10 January 2026 14:41:54 +0000 (0:00:00.527) 0:00:01.080 ******
2026-01-10 14:45:15.397342 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:45:15.397354 | orchestrator |
2026-01-10 14:45:15.397362 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-01-10 14:45:15.397369 | orchestrator | Saturday 10 January 2026 14:41:54 +0000 (0:00:00.637) 0:00:01.717 ******
2026-01-10 14:45:15.397376 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-01-10 14:45:15.397383 | orchestrator |
2026-01-10 14:45:15.397406 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-01-10 14:45:15.397414 | orchestrator | Saturday 10 January 2026 14:41:58 +0000 (0:00:03.882) 0:00:05.600 ******
2026-01-10 14:45:15.397421 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-01-10 14:45:15.397428 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-01-10 14:45:15.397435 | orchestrator |
2026-01-10 14:45:15.397442 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-01-10 14:45:15.397449 | orchestrator | Saturday 10 January 2026 14:42:05 +0000 (0:00:06.737) 0:00:12.338 ******
2026-01-10 14:45:15.397455 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-01-10 14:45:15.397462 | orchestrator |
2026-01-10 14:45:15.397476 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-01-10 14:45:15.397483 | orchestrator | Saturday 10 January 2026 14:42:10 +0000 (0:00:04.631) 0:00:16.969 ******
2026-01-10 14:45:15.397490 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:45:15.397505 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-01-10 14:45:15.397512 | orchestrator |
2026-01-10 14:45:15.397520 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-01-10 14:45:15.397527 | orchestrator | Saturday 10 January 2026 14:42:14 +0000 (0:00:04.641) 0:00:21.610 ******
2026-01-10 14:45:15.397534 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:45:15.397540 | orchestrator |
2026-01-10 14:45:15.397547 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-01-10 14:45:15.397554 | orchestrator | Saturday 10 January 2026 14:42:18 +0000 (0:00:04.707) 0:00:25.799 ******
2026-01-10 14:45:15.397562 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-01-10 14:45:15.397568 | orchestrator |
2026-01-10 14:45:15.397575 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-01-10 14:45:15.397583 | orchestrator | Saturday 10 January 2026 14:42:23 +0000 (0:00:04.188) 0:00:30.507 ******
2026-01-10 14:45:15.397593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-10 14:45:15.397605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-10 14:45:15.397613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-10 14:45:15.397629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:45:15.397647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:45:15.397655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:45:15.397663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.397782 | orchestrator |
2026-01-10 14:45:15.397789 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-01-10 14:45:15.397797 | orchestrator | Saturday 10 January 2026 14:42:27 +0000 (0:00:03.397) 0:00:33.905 ******
2026-01-10 14:45:15.397804 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:15.397809 | orchestrator |
2026-01-10 14:45:15.397817 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-01-10 14:45:15.397823 | orchestrator | Saturday 10 January 2026 14:42:27 +0000 (0:00:00.125) 0:00:34.030 ******
2026-01-10 14:45:15.397831 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:15.397838 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:45:15.397846 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:45:15.397853 | orchestrator |
2026-01-10 14:45:15.397860 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-10 14:45:15.397867 | orchestrator | Saturday 10 January 2026 14:42:27 +0000 (0:00:00.317) 0:00:34.347 ******
2026-01-10 14:45:15.397875 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:45:15.397882 | orchestrator |
2026-01-10 14:45:15.397890 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-01-10 14:45:15.397896 | orchestrator | Saturday 10 January 2026 14:42:28 +0000 (0:00:00.699) 0:00:35.047 ******
2026-01-10 14:45:15.397903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-10 14:45:15.397910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-10 14:45:15.397928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:45:15.397944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.397953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.397960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.397968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.397975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.397989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398081 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398155 | orchestrator | 2026-01-10 14:45:15.398163 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-10 14:45:15.398170 | orchestrator | Saturday 10 January 2026 14:42:35 +0000 (0:00:06.997) 0:00:42.044 ****** 
2026-01-10 14:45:15.398178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.398185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:45:15.398193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.398229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.398237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:45:15.398244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398257 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:45:15.398268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398310 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:15.398318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398376 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:15.398383 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:15.398390 | orchestrator | 2026-01-10 14:45:15.398397 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-10 14:45:15.398404 | orchestrator | Saturday 10 January 2026 14:42:37 +0000 (0:00:02.144) 0:00:44.188 ****** 2026-01-10 14:45:15.398411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.398418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:45:15.398431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398479 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:15.398487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.398496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:45:15.398508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398545 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:15.398556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.398563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:45:15.398579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.398587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-01-10 14:45:15.398596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.398609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.398617 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:45:15.398625 | orchestrator |
2026-01-10 14:45:15.398631 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-01-10 14:45:15.398639 | orchestrator | Saturday 10 January 2026 14:42:38 +0000 (0:00:01.409) 0:00:45.597 ******
2026-01-10 14:45:15.398651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:45:15.398658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:45:15.398670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:45:15.398678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.398792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.398800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.398807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.398813 | orchestrator |
2026-01-10 14:45:15.398824 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-01-10 14:45:15.398832 | orchestrator | Saturday 10 January 2026 14:42:46 +0000 (0:00:07.756) 0:00:53.354 ******
2026-01-10 14:45:15.398843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-10 14:45:15.398856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-10 14:45:15.398864
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:45:15.398872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398980 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.398995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:45:15.399010 | orchestrator |
2026-01-10 14:45:15.399017 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-01-10 14:45:15.399024 | orchestrator | Saturday 10 January 2026 14:43:09 +0000 (0:00:22.599) 0:01:15.954 ******
2026-01-10 14:45:15.399031 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-10 14:45:15.399038 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-10 14:45:15.399045 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-10 14:45:15.399051 | orchestrator |
2026-01-10 14:45:15.399058 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-01-10 14:45:15.399069 | orchestrator | Saturday 10 January 2026 14:43:15 +0000 (0:00:06.051) 0:01:22.005 ******
2026-01-10 14:45:15.399077 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-10 14:45:15.399089 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-10 14:45:15.399096 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-10 14:45:15.399103 | orchestrator |
2026-01-10 14:45:15.399110 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-01-10 14:45:15.399116 | orchestrator | Saturday 10 January 2026 14:43:19 +0000 (0:00:03.994) 0:01:26.000 ******
2026-01-10 14:45:15.399127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-10 14:45:15.399135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-10 14:45:15.399142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name':
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.399150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-10 14:45:15.399201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399233 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399362 | orchestrator | 2026-01-10 14:45:15.399369 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-01-10 14:45:15.399381 | orchestrator | Saturday 10 January 2026 14:43:23 +0000 (0:00:04.018) 0:01:30.018 ****** 2026-01-10 14:45:15.399392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.399400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.399408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.399415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-10 14:45:15.399422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399459 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399561 | orchestrator | 2026-01-10 14:45:15.399568 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-10 14:45:15.399575 | orchestrator | Saturday 10 January 2026 14:43:26 +0000 (0:00:03.459) 0:01:33.477 ****** 2026-01-10 14:45:15.399582 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:15.399590 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:15.399597 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:15.399604 | orchestrator | 2026-01-10 14:45:15.399611 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-01-10 14:45:15.399618 | orchestrator | Saturday 10 January 2026 14:43:27 +0000 (0:00:00.935) 0:01:34.413 ****** 2026-01-10 14:45:15.399632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.399640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:45:15.399647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-01-10 14:45:15.399654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399681 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:15.399693 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.399705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:45:15.399712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399743 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:15.399809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-10 14:45:15.399825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:45:15.399832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:45:15.399862 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:15.399869 | orchestrator | 2026-01-10 14:45:15.399876 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-01-10 14:45:15.399883 | orchestrator | Saturday 10 January 2026 14:43:28 +0000 (0:00:00.908) 0:01:35.321 ****** 2026-01-10 14:45:15.399894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:45:15.399905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:45:15.399912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-10 14:45:15.399922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399981 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.399994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.400004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.400014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.400020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.400031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.400037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:45:15.400044 | orchestrator | 2026-01-10 14:45:15.400050 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-10 14:45:15.400056 | orchestrator | Saturday 10 January 2026 14:43:33 +0000 (0:00:04.824) 0:01:40.146 ****** 2026-01-10 14:45:15.400062 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:15.400069 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:15.400075 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:15.400082 | orchestrator | 2026-01-10 14:45:15.400088 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-01-10 14:45:15.400093 | orchestrator | Saturday 10 January 2026 14:43:33 +0000 (0:00:00.286) 0:01:40.432 ****** 2026-01-10 14:45:15.400099 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-01-10 14:45:15.400105 | orchestrator | 2026-01-10 14:45:15.400111 | orchestrator | TASK [designate : Creating Designate databases user and setting 
permissions] *** 2026-01-10 14:45:15.400116 | orchestrator | Saturday 10 January 2026 14:43:36 +0000 (0:00:02.923) 0:01:43.356 ****** 2026-01-10 14:45:15.400122 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-10 14:45:15.400127 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-01-10 14:45:15.400133 | orchestrator | 2026-01-10 14:45:15.400138 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-01-10 14:45:15.400144 | orchestrator | Saturday 10 January 2026 14:43:39 +0000 (0:00:02.882) 0:01:46.238 ****** 2026-01-10 14:45:15.400149 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:45:15.400155 | orchestrator | 2026-01-10 14:45:15.400160 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-10 14:45:15.400167 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:17.249) 0:02:03.488 ****** 2026-01-10 14:45:15.400172 | orchestrator | 2026-01-10 14:45:15.400181 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-10 14:45:15.400186 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:00.277) 0:02:03.765 ****** 2026-01-10 14:45:15.400192 | orchestrator | 2026-01-10 14:45:15.400198 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-10 14:45:15.400203 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:00.066) 0:02:03.832 ****** 2026-01-10 14:45:15.400209 | orchestrator | 2026-01-10 14:45:15.400215 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-01-10 14:45:15.400221 | orchestrator | Saturday 10 January 2026 14:43:57 +0000 (0:00:00.071) 0:02:03.903 ****** 2026-01-10 14:45:15.400226 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:45:15.400235 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:45:15.400241 | 
orchestrator | changed: [testbed-node-0] 2026-01-10 14:45:15.400246 | orchestrator | 2026-01-10 14:45:15.400255 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-01-10 14:45:15.400261 | orchestrator | Saturday 10 January 2026 14:44:09 +0000 (0:00:12.224) 0:02:16.128 ****** 2026-01-10 14:45:15.400267 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:45:15.400272 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:45:15.400278 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:45:15.400308 | orchestrator | 2026-01-10 14:45:15.400314 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-01-10 14:45:15.400319 | orchestrator | Saturday 10 January 2026 14:44:20 +0000 (0:00:11.699) 0:02:27.827 ****** 2026-01-10 14:45:15.400325 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:45:15.400331 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:45:15.400337 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:45:15.400342 | orchestrator | 2026-01-10 14:45:15.400347 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-01-10 14:45:15.400353 | orchestrator | Saturday 10 January 2026 14:44:33 +0000 (0:00:12.990) 0:02:40.818 ****** 2026-01-10 14:45:15.400358 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:45:15.400363 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:45:15.400369 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:45:15.400375 | orchestrator | 2026-01-10 14:45:15.400380 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-01-10 14:45:15.400386 | orchestrator | Saturday 10 January 2026 14:44:42 +0000 (0:00:08.467) 0:02:49.286 ****** 2026-01-10 14:45:15.400391 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:45:15.400397 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:45:15.400403 | orchestrator | 
changed: [testbed-node-2] 2026-01-10 14:45:15.400408 | orchestrator | 2026-01-10 14:45:15.400413 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-01-10 14:45:15.400419 | orchestrator | Saturday 10 January 2026 14:44:54 +0000 (0:00:12.244) 0:03:01.531 ****** 2026-01-10 14:45:15.400424 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:45:15.400429 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:45:15.400434 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:45:15.400440 | orchestrator | 2026-01-10 14:45:15.400445 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-01-10 14:45:15.400450 | orchestrator | Saturday 10 January 2026 14:45:05 +0000 (0:00:10.836) 0:03:12.368 ****** 2026-01-10 14:45:15.400455 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:45:15.400461 | orchestrator | 2026-01-10 14:45:15.400466 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:45:15.400473 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:45:15.400480 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:45:15.400486 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:45:15.400491 | orchestrator | 2026-01-10 14:45:15.400497 | orchestrator | 2026-01-10 14:45:15.400502 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:45:15.400507 | orchestrator | Saturday 10 January 2026 14:45:13 +0000 (0:00:08.188) 0:03:20.557 ****** 2026-01-10 14:45:15.400513 | orchestrator | =============================================================================== 2026-01-10 14:45:15.400519 | orchestrator | designate : Copying over designate.conf 
-------------------------------- 22.60s 2026-01-10 14:45:15.400526 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.25s 2026-01-10 14:45:15.400533 | orchestrator | designate : Restart designate-central container ------------------------ 12.99s 2026-01-10 14:45:15.400546 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.24s 2026-01-10 14:45:15.400552 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.22s 2026-01-10 14:45:15.400558 | orchestrator | designate : Restart designate-api container ---------------------------- 11.70s 2026-01-10 14:45:15.400564 | orchestrator | designate : Restart designate-worker container ------------------------- 10.84s 2026-01-10 14:45:15.400571 | orchestrator | designate : Restart designate-producer container ------------------------ 8.47s 2026-01-10 14:45:15.400577 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.19s 2026-01-10 14:45:15.400583 | orchestrator | designate : Copying over config.json files for services ----------------- 7.76s 2026-01-10 14:45:15.400589 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.00s 2026-01-10 14:45:15.400595 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.74s 2026-01-10 14:45:15.400600 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.05s 2026-01-10 14:45:15.400606 | orchestrator | designate : Check designate containers ---------------------------------- 4.82s 2026-01-10 14:45:15.400618 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.71s 2026-01-10 14:45:15.400624 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.64s 2026-01-10 14:45:15.400630 | orchestrator | service-ks-register : designate | Creating projects 
--------------------- 4.63s 2026-01-10 14:45:15.400636 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.19s 2026-01-10 14:45:15.400642 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.02s 2026-01-10 14:45:15.400647 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.99s 2026-01-10 14:45:15.400653 | orchestrator | 2026-01-10 14:45:15 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:45:15.400664 | orchestrator | 2026-01-10 14:45:15 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:45:15.400670 | orchestrator | 2026-01-10 14:45:15 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:18.426500 | orchestrator | 2026-01-10 14:45:18 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:45:18.426798 | orchestrator | 2026-01-10 14:45:18 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:45:18.427382 | orchestrator | 2026-01-10 14:45:18 | INFO  | Task 88403b23-a2f4-4987-a703-b4be1e7fd65b is in state STARTED 2026-01-10 14:45:18.427939 | orchestrator | 2026-01-10 14:45:18 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:45:18.428036 | orchestrator | 2026-01-10 14:45:18 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:52.030438 | orchestrator | 2026-01-10 14:45:52 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:45:52.031627 | orchestrator | 2026-01-10 14:45:52 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:45:52.032671 | orchestrator | 2026-01-10 14:45:52 | INFO  | Task 88403b23-a2f4-4987-a703-b4be1e7fd65b is in state SUCCESS 2026-01-10 14:45:52.036712 | orchestrator | 2026-01-10 14:45:52 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:45:52.036788 | orchestrator | 2026-01-10 14:45:52 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:55.064746 | orchestrator | 2026-01-10 14:45:55 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:45:55.066193 | orchestrator | 2026-01-10 14:45:55 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:45:55.066828 | orchestrator | 2026-01-10 14:45:55 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:45:55.067660 | orchestrator | 2026-01-10 14:45:55 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:45:55.067678 | orchestrator | 2026-01-10 14:45:55 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:46:10.310142 | orchestrator
| 2026-01-10 14:46:10 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:46:10.311572 | orchestrator | 2026-01-10 14:46:10 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:46:10.313781 | orchestrator | 2026-01-10 14:46:10 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:46:10.314542 | orchestrator | 2026-01-10 14:46:10 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:46:10.315739 | orchestrator | 2026-01-10 14:46:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:13.373056 | orchestrator | 2026-01-10 14:46:13 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:46:13.375618 | orchestrator | 2026-01-10 14:46:13 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:46:13.378464 | orchestrator | 2026-01-10 14:46:13 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:46:13.380383 | orchestrator | 2026-01-10 14:46:13 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:46:13.380850 | orchestrator | 2026-01-10 14:46:13 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:16.432476 | orchestrator | 2026-01-10 14:46:16 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state STARTED 2026-01-10 14:46:16.434371 | orchestrator | 2026-01-10 14:46:16 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:46:16.434520 | orchestrator | 2026-01-10 14:46:16 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:46:16.436473 | orchestrator | 2026-01-10 14:46:16 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:46:16.436501 | orchestrator | 2026-01-10 14:46:16 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:19.505569 | orchestrator | 2026-01-10 14:46:19.505756 | 
orchestrator |
2026-01-10 14:46:19.505776 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:46:19.505785 | orchestrator |
2026-01-10 14:46:19.505792 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:46:19.505800 | orchestrator | Saturday 10 January 2026 14:45:19 +0000 (0:00:00.277) 0:00:00.277 ******
2026-01-10 14:46:19.505806 | orchestrator | ok: [testbed-manager]
2026-01-10 14:46:19.505814 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:46:19.505820 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:46:19.505826 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:46:19.505832 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:46:19.505838 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:46:19.505845 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:46:19.505851 | orchestrator |
2026-01-10 14:46:19.505858 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:46:19.505898 | orchestrator | Saturday 10 January 2026 14:45:20 +0000 (0:00:00.784) 0:00:01.061 ******
2026-01-10 14:46:19.505907 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:19.505913 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:19.505920 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:19.505928 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:19.505935 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:19.505942 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:19.505949 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:19.505956 | orchestrator |
2026-01-10 14:46:19.505963 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-10 14:46:19.505970 | orchestrator |
2026-01-10 14:46:19.505977 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-01-10 14:46:19.505984 | orchestrator | Saturday 10 January 2026 14:45:21 +0000 (0:00:00.733) 0:00:01.795 ******
2026-01-10 14:46:19.505992 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:46:19.506000 | orchestrator |
2026-01-10 14:46:19.506008 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-01-10 14:46:19.506064 | orchestrator | Saturday 10 January 2026 14:45:23 +0000 (0:00:02.005) 0:00:03.800 ******
2026-01-10 14:46:19.506072 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-01-10 14:46:19.506078 | orchestrator |
2026-01-10 14:46:19.506085 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-01-10 14:46:19.506092 | orchestrator | Saturday 10 January 2026 14:45:27 +0000 (0:00:03.905) 0:00:07.706 ******
2026-01-10 14:46:19.506099 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-01-10 14:46:19.506108 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-01-10 14:46:19.506115 | orchestrator |
2026-01-10 14:46:19.506121 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-01-10 14:46:19.506127 | orchestrator | Saturday 10 January 2026 14:45:33 +0000 (0:00:03.387) 0:00:13.462 ******
2026-01-10 14:46:19.506133 | orchestrator | ok: [testbed-manager] => (item=service)
2026-01-10 14:46:19.506140 | orchestrator |
2026-01-10 14:46:19.506146 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-01-10 14:46:19.506153 | orchestrator | Saturday 10 January 2026 14:45:36 +0000 (0:00:03.387) 0:00:16.850 ******
2026-01-10 14:46:19.506181 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:46:19.506188 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-01-10 14:46:19.506195 | orchestrator |
2026-01-10 14:46:19.506201 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-10 14:46:19.506207 | orchestrator | Saturday 10 January 2026 14:45:40 +0000 (0:00:03.921) 0:00:20.772 ******
2026-01-10 14:46:19.506236 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-01-10 14:46:19.506242 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-01-10 14:46:19.506248 | orchestrator |
2026-01-10 14:46:19.506254 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-01-10 14:46:19.506260 | orchestrator | Saturday 10 January 2026 14:45:46 +0000 (0:00:06.171) 0:00:26.943 ******
2026-01-10 14:46:19.506267 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-01-10 14:46:19.506274 | orchestrator |
2026-01-10 14:46:19.506281 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:46:19.506288 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:19.506296 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:19.506303 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:19.506310 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:19.506341 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:19.506349 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:19.506356 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:19.506363 | orchestrator |
2026-01-10 14:46:19.506369 | orchestrator |
2026-01-10 14:46:19.506375 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:46:19.506382 | orchestrator | Saturday 10 January 2026 14:45:51 +0000 (0:00:04.774) 0:00:31.718 ******
2026-01-10 14:46:19.506388 | orchestrator | ===============================================================================
2026-01-10 14:46:19.506394 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.17s
2026-01-10 14:46:19.506408 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.76s
2026-01-10 14:46:19.506416 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.77s
2026-01-10 14:46:19.506424 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.92s
2026-01-10 14:46:19.506432 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.91s
2026-01-10 14:46:19.506439 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.39s
2026-01-10 14:46:19.506445 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.01s
2026-01-10 14:46:19.506452 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s
2026-01-10 14:46:19.506461 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s
2026-01-10 14:46:19.506468 | orchestrator |
2026-01-10 14:46:19.506476 | orchestrator |
2026-01-10 14:46:19.506483 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:46:19.506491 | orchestrator |
2026-01-10 14:46:19.506497 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:46:19.506547 | orchestrator | Saturday 10 January 2026 14:44:06 +0000 (0:00:00.312) 0:00:00.312 ******
2026-01-10 14:46:19.506555 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:46:19.506562 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:46:19.506568 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:46:19.506574 | orchestrator |
2026-01-10 14:46:19.506581 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:46:19.506587 | orchestrator | Saturday 10 January 2026 14:44:06 +0000 (0:00:00.364) 0:00:00.676 ******
2026-01-10 14:46:19.506593 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-10 14:46:19.506599 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-10 14:46:19.506606 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-10 14:46:19.506612 | orchestrator |
2026-01-10 14:46:19.506618 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-10 14:46:19.506624 | orchestrator |
2026-01-10 14:46:19.506629 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-10 14:46:19.506635 | orchestrator | Saturday 10 January 2026 14:44:06 +0000 (0:00:00.417) 0:00:01.093 ******
2026-01-10 14:46:19.506641 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:46:19.506648 | orchestrator |
2026-01-10 14:46:19.506655 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-01-10 14:46:19.506662 | orchestrator | Saturday 10 January 2026 14:44:07 +0000 (0:00:00.828) 0:00:01.922 ******
2026-01-10 14:46:19.506668 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-01-10 14:46:19.506674 | orchestrator |
2026-01-10 14:46:19.506680 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-01-10 14:46:19.506686 | orchestrator | Saturday 10 January 2026 14:44:11 +0000 (0:00:03.471) 0:00:05.394 ******
2026-01-10 14:46:19.506693 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-01-10 14:46:19.506700 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-01-10 14:46:19.506706 | orchestrator |
2026-01-10 14:46:19.506712 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-01-10 14:46:19.506719 | orchestrator | Saturday 10 January 2026 14:44:19 +0000 (0:00:07.934) 0:00:13.329 ******
2026-01-10 14:46:19.506726 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-10 14:46:19.506733 | orchestrator |
2026-01-10 14:46:19.506739 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-01-10 14:46:19.506745 | orchestrator | Saturday 10 January 2026 14:44:23 +0000 (0:00:04.168) 0:00:17.498 ******
2026-01-10 14:46:19.506751 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:46:19.506757 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-01-10 14:46:19.506763 | orchestrator |
2026-01-10 14:46:19.506770 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-01-10 14:46:19.506777 | orchestrator | Saturday 10 January 2026 14:44:28 +0000 (0:00:04.810) 0:00:22.308 ******
2026-01-10 14:46:19.506783 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:46:19.506790 | orchestrator |
2026-01-10 14:46:19.506797 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-01-10 14:46:19.506803 | orchestrator | Saturday 10 January 2026 14:44:31 +0000 (0:00:03.317) 0:00:25.626 ******
2026-01-10 14:46:19.506809 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-01-10 14:46:19.506816 | orchestrator |
2026-01-10 14:46:19.506822 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-01-10 14:46:19.506829 | orchestrator | Saturday 10 January 2026 14:44:35 +0000 (0:00:04.232) 0:00:29.858 ******
2026-01-10 14:46:19.506836 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:46:19.506842 | orchestrator |
2026-01-10 14:46:19.506868 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-01-10 14:46:19.506876 | orchestrator | Saturday 10 January 2026 14:44:39 +0000 (0:00:04.105) 0:00:33.963 ******
2026-01-10 14:46:19.506882 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:46:19.506889 | orchestrator |
2026-01-10 14:46:19.506895 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-01-10 14:46:19.506902 | orchestrator | Saturday 10 January 2026 14:44:44 +0000 (0:00:04.671) 0:00:38.635 ******
2026-01-10 14:46:19.506909 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:46:19.506915 | orchestrator |
2026-01-10 14:46:19.506922 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-01-10 14:46:19.506938 | orchestrator | Saturday 10 January 2026 14:44:47 +0000 (0:00:03.517) 0:00:42.152 ******
2026-01-10 14:46:19.506948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-10 14:46:19.506956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-10 14:46:19.506963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-10 14:46:19.506982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:46:19.506993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:46:19.507001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:46:19.507008 | orchestrator |
2026-01-10 14:46:19.507014 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-01-10 14:46:19.507021 | orchestrator | Saturday 10 January 2026 14:44:49 +0000 (0:00:01.381) 0:00:43.534 ******
2026-01-10 14:46:19.507028 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:19.507034 | orchestrator |
2026-01-10 14:46:19.507041 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-01-10 14:46:19.507047 | orchestrator | Saturday 10 January 2026 14:44:49 +0000 (0:00:00.112) 0:00:43.646 ******
2026-01-10 14:46:19.507053 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:19.507060 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:19.507066 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:19.507073 | orchestrator |
2026-01-10 14:46:19.507080 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-01-10 14:46:19.507086 | orchestrator | Saturday 10 January 2026 14:44:49 +0000 (0:00:00.450) 0:00:44.097 ******
2026-01-10 14:46:19.507093 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:46:19.507100 | orchestrator |
2026-01-10 14:46:19.507107 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-01-10 14:46:19.507113 | orchestrator | Saturday 10 January 2026 14:44:50 +0000 (0:00:01.010) 0:00:45.108 ******
2026-01-10 14:46:19.507120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-10 14:46:19.507142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-10 14:46:19.507149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-10 14:46:19.507156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:46:19.507164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:46:19.507170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:46:19.507177 | orchestrator |
2026-01-10 14:46:19.507189 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-01-10 14:46:19.507196 | orchestrator | Saturday 10 January 2026 14:44:53 +0000 (0:00:02.391) 0:00:47.499 ******
2026-01-10 14:46:19.507202 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:46:19.507209 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:46:19.507272 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:46:19.507279 | orchestrator |
2026-01-10 14:46:19.507286 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-10 14:46:19.507293 | orchestrator | Saturday 10 January 2026 14:44:54 +0000 (0:00:00.885) 0:00:48.384 ******
2026-01-10 14:46:19.507299 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:46:19.507305 | orchestrator |
2026-01-10 14:46:19.507312 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-01-10 14:46:19.507325 | orchestrator | Saturday 10 January 2026 14:44:56 +0000 (0:00:02.663) 0:00:51.047 ******
2026-01-10 14:46:19.507336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-10 14:46:19.507344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:46:19.507351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-10 14:46:19.507365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-10 14:46:19.507380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:46:19.507387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:46:19.507393 | orchestrator |
2026-01-10 14:46:19.507399 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-01-10 14:46:19.507409 | orchestrator | Saturday 10 January 2026 14:45:01 +0000 (0:00:04.264) 0:00:55.311 ******
2026-01-10 14:46:19.507416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-10 14:46:19.507422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:46:19.507429 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:19.507429 | orchestrator | skipping: [testbed-node-1]
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:46:19.507445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:46:19.507452 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:19.507465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:46:19.507476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:46:19.507483 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:19.507490 | orchestrator | 2026-01-10 14:46:19.507496 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-01-10 14:46:19.507503 | orchestrator | Saturday 10 January 2026 14:45:02 +0000 (0:00:01.505) 0:00:56.816 ****** 2026-01-10 14:46:19.507510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:46:19.507522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:46:19.507529 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:19.507544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:46:19 | INFO  | Task dd0f9ce8-a31e-44fa-bbeb-aa26be471e93 is in state SUCCESS 2026-01-10 14:46:19.507565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:46:19.507572 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:19.507579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:46:19.507591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:46:19.507598 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:19.507605 | orchestrator | 2026-01-10 14:46:19.507611 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-01-10 14:46:19.507619 | orchestrator | Saturday 10 January 2026 14:45:04 +0000 (0:00:01.816) 0:00:58.633 ****** 2026-01-10 14:46:19.507625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:46:19.507635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:46:19.507646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:46:19.507653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:46:19.507686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:46:19.507693 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:46:19.507699 | orchestrator | 2026-01-10 14:46:19.507705 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-01-10 14:46:19.507711 | orchestrator | Saturday 10 January 2026 14:45:07 +0000 (0:00:02.952) 0:01:01.586 ****** 2026-01-10 14:46:19.507722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:46:19.507734 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:46:19.507745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:46:19.507752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:46:19.507759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:46:19.507772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:46:19.507779 | orchestrator | 2026-01-10 14:46:19.507786 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-01-10 14:46:19.507792 | orchestrator | Saturday 10 January 2026 14:45:12 +0000 (0:00:04.941) 0:01:06.528 ****** 2026-01-10 14:46:19.507804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:46:19.507817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:46:19.507824 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:19.507830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:46:19.507837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:46:19.507843 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:19.507860 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-10 14:46:19.507866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:46:19.507878 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:19.507884 | orchestrator | 2026-01-10 14:46:19.507890 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-01-10 14:46:19.507897 | orchestrator | Saturday 10 January 2026 14:45:13 +0000 (0:00:01.237) 
0:01:07.765 ****** 2026-01-10 14:46:19.507904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:46:19.507911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:46:19.507924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-10 14:46:19.507936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:46:19.507950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:46:19.507957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:46:19.507964 | orchestrator | 2026-01-10 14:46:19.507971 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-10 14:46:19.507979 | orchestrator | Saturday 10 January 2026 14:45:17 +0000 (0:00:03.534) 0:01:11.299 ****** 2026-01-10 14:46:19.507985 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:19.507992 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:19.507999 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:19.508005 | orchestrator | 2026-01-10 14:46:19.508011 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-10 14:46:19.508018 | orchestrator | Saturday 10 January 2026 14:45:17 +0000 (0:00:00.655) 0:01:11.955 ****** 2026-01-10 14:46:19.508024 | orchestrator | changed: [testbed-node-0] 
2026-01-10 14:46:19.508030 | orchestrator | 2026-01-10 14:46:19.508037 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-10 14:46:19.508043 | orchestrator | Saturday 10 January 2026 14:45:20 +0000 (0:00:02.545) 0:01:14.501 ****** 2026-01-10 14:46:19.508049 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:19.508056 | orchestrator | 2026-01-10 14:46:19.508062 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-01-10 14:46:19.508068 | orchestrator | Saturday 10 January 2026 14:45:23 +0000 (0:00:02.895) 0:01:17.396 ****** 2026-01-10 14:46:19.508074 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:19.508081 | orchestrator | 2026-01-10 14:46:19.508087 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-10 14:46:19.508093 | orchestrator | Saturday 10 January 2026 14:45:41 +0000 (0:00:17.961) 0:01:35.358 ****** 2026-01-10 14:46:19.508100 | orchestrator | 2026-01-10 14:46:19.508106 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-10 14:46:19.508113 | orchestrator | Saturday 10 January 2026 14:45:41 +0000 (0:00:00.208) 0:01:35.567 ****** 2026-01-10 14:46:19.508120 | orchestrator | 2026-01-10 14:46:19.508127 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-10 14:46:19.508134 | orchestrator | Saturday 10 January 2026 14:45:41 +0000 (0:00:00.226) 0:01:35.793 ****** 2026-01-10 14:46:19.508150 | orchestrator | 2026-01-10 14:46:19.508158 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-10 14:46:19.508164 | orchestrator | Saturday 10 January 2026 14:45:41 +0000 (0:00:00.166) 0:01:35.959 ****** 2026-01-10 14:46:19.508171 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:19.508178 | orchestrator | changed: [testbed-node-1] 
2026-01-10 14:46:19.508191 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:46:19.508198 | orchestrator | 2026-01-10 14:46:19.508204 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-01-10 14:46:19.508235 | orchestrator | Saturday 10 January 2026 14:46:02 +0000 (0:00:20.775) 0:01:56.735 ****** 2026-01-10 14:46:19.508243 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:19.508250 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:46:19.508256 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:46:19.508263 | orchestrator | 2026-01-10 14:46:19.508270 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:46:19.508276 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:46:19.508289 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:46:19.508296 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:46:19.508303 | orchestrator | 2026-01-10 14:46:19.508310 | orchestrator | 2026-01-10 14:46:19.508317 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:46:19.508324 | orchestrator | Saturday 10 January 2026 14:46:16 +0000 (0:00:13.545) 0:02:10.281 ****** 2026-01-10 14:46:19.508330 | orchestrator | =============================================================================== 2026-01-10 14:46:19.508337 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.78s 2026-01-10 14:46:19.508343 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.96s 2026-01-10 14:46:19.508349 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.55s 2026-01-10 14:46:19.508356 | 
orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.94s 2026-01-10 14:46:19.508362 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.94s 2026-01-10 14:46:19.508368 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.81s 2026-01-10 14:46:19.508375 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.67s 2026-01-10 14:46:19.508381 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 4.26s 2026-01-10 14:46:19.508388 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.23s 2026-01-10 14:46:19.508395 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 4.17s 2026-01-10 14:46:19.508402 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 4.11s 2026-01-10 14:46:19.508409 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.53s 2026-01-10 14:46:19.508415 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.52s 2026-01-10 14:46:19.508422 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.47s 2026-01-10 14:46:19.508429 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.32s 2026-01-10 14:46:19.508436 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.95s 2026-01-10 14:46:19.508442 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.90s 2026-01-10 14:46:19.508449 | orchestrator | magnum : include_tasks -------------------------------------------------- 2.66s 2026-01-10 14:46:19.508456 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.55s 2026-01-10 14:46:19.508470 | orchestrator | 
magnum : Copying over kubeconfig file ----------------------------------- 2.39s 2026-01-10 14:46:19.508477 | orchestrator | 2026-01-10 14:46:19 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:46:19.508622 | orchestrator | 2026-01-10 14:46:19 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:46:19.508635 | orchestrator | 2026-01-10 14:46:19 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:46:19.509683 | orchestrator | 2026-01-10 14:46:19 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:46:19.510328 | orchestrator | 2026-01-10 14:46:19 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:22.558805 | orchestrator | 2026-01-10 14:46:22 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:46:22.559972 | orchestrator | 2026-01-10 14:46:22 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:46:22.561569 | orchestrator | 2026-01-10 14:46:22 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:46:22.563268 | orchestrator | 2026-01-10 14:46:22 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:46:22.563307 | orchestrator | 2026-01-10 14:46:22 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:25.614790 | orchestrator | 2026-01-10 14:46:25 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:46:25.614881 | orchestrator | 2026-01-10 14:46:25 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:46:25.615449 | orchestrator | 2026-01-10 14:46:25 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:46:25.616309 | orchestrator | 2026-01-10 14:46:25 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:46:25.616428 | orchestrator | 2026-01-10 14:46:25 
| INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:28.648885 | orchestrator | 2026-01-10 14:46:28 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:46:28.649068 | orchestrator | 2026-01-10 14:46:28 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:46:28.649802 | orchestrator | 2026-01-10 14:46:28 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:46:28.650344 | orchestrator | 2026-01-10 14:46:28 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:46:28.650375 | orchestrator | 2026-01-10 14:46:28 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:31.696483 | orchestrator | 2026-01-10 14:46:31 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:46:31.696545 | orchestrator | 2026-01-10 14:46:31 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state STARTED 2026-01-10 14:46:31.696553 | orchestrator | 2026-01-10 14:46:31 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:46:31.696558 | orchestrator | 2026-01-10 14:46:31 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:46:31.696564 | orchestrator | 2026-01-10 14:46:31 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:34.743718 | orchestrator | 2026-01-10 14:46:34 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:46:34.748441 | orchestrator | 2026-01-10 14:46:34 | INFO  | Task 7e8cd5a3-107e-4768-83e6-68fd659e1491 is in state SUCCESS 2026-01-10 14:46:34.750070 | orchestrator | 2026-01-10 14:46:34.750116 | orchestrator | 2026-01-10 14:46:34.750123 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:46:34.750141 | orchestrator | 2026-01-10 14:46:34.750145 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-01-10 14:46:34.750149 | orchestrator | Saturday 10 January 2026 14:41:53 +0000 (0:00:00.462) 0:00:00.462 ****** 2026-01-10 14:46:34.750153 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:46:34.750157 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:46:34.750161 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:46:34.750165 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:46:34.750171 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:46:34.750178 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:46:34.750184 | orchestrator | 2026-01-10 14:46:34.750190 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:46:34.750209 | orchestrator | Saturday 10 January 2026 14:41:54 +0000 (0:00:00.877) 0:00:01.339 ****** 2026-01-10 14:46:34.750215 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-10 14:46:34.750244 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-10 14:46:34.750251 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-10 14:46:34.750257 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-10 14:46:34.750263 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-10 14:46:34.750269 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-10 14:46:34.750275 | orchestrator | 2026-01-10 14:46:34.750281 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-10 14:46:34.750287 | orchestrator | 2026-01-10 14:46:34.750316 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-10 14:46:34.750323 | orchestrator | Saturday 10 January 2026 14:41:55 +0000 (0:00:00.706) 0:00:02.046 ****** 2026-01-10 14:46:34.750330 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:46:34.750363 | orchestrator | 2026-01-10 14:46:34.750371 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-10 14:46:34.750377 | orchestrator | Saturday 10 January 2026 14:41:56 +0000 (0:00:01.048) 0:00:03.095 ****** 2026-01-10 14:46:34.750383 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:46:34.750393 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:46:34.750399 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:46:34.750441 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:46:34.750449 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:46:34.750455 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:46:34.750461 | orchestrator | 2026-01-10 14:46:34.750467 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-10 14:46:34.750474 | orchestrator | Saturday 10 January 2026 14:41:57 +0000 (0:00:01.128) 0:00:04.224 ****** 2026-01-10 14:46:34.750480 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:46:34.750522 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:46:34.750529 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:46:34.750558 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:46:34.750564 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:46:34.750568 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:46:34.750587 | orchestrator | 2026-01-10 14:46:34.750593 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-10 14:46:34.750613 | orchestrator | Saturday 10 January 2026 14:41:58 +0000 (0:00:01.003) 0:00:05.227 ****** 2026-01-10 14:46:34.750620 | orchestrator | ok: [testbed-node-0] => { 2026-01-10 14:46:34.750627 | orchestrator |  "changed": false, 2026-01-10 14:46:34.750633 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:46:34.750639 | orchestrator | } 2026-01-10 14:46:34.750646 | orchestrator | 
ok: [testbed-node-1] => { 2026-01-10 14:46:34.750652 | orchestrator |  "changed": false, 2026-01-10 14:46:34.750658 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:46:34.750663 | orchestrator | } 2026-01-10 14:46:34.750670 | orchestrator | ok: [testbed-node-2] => { 2026-01-10 14:46:34.750687 | orchestrator |  "changed": false, 2026-01-10 14:46:34.750693 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:46:34.750700 | orchestrator | } 2026-01-10 14:46:34.750706 | orchestrator | ok: [testbed-node-3] => { 2026-01-10 14:46:34.750712 | orchestrator |  "changed": false, 2026-01-10 14:46:34.750718 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:46:34.750724 | orchestrator | } 2026-01-10 14:46:34.750728 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:46:34.750732 | orchestrator |  "changed": false, 2026-01-10 14:46:34.750735 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:46:34.750739 | orchestrator | } 2026-01-10 14:46:34.750743 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:46:34.750754 | orchestrator |  "changed": false, 2026-01-10 14:46:34.750758 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:46:34.750761 | orchestrator | } 2026-01-10 14:46:34.750765 | orchestrator | 2026-01-10 14:46:34.750769 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-10 14:46:34.750773 | orchestrator | Saturday 10 January 2026 14:41:59 +0000 (0:00:00.815) 0:00:06.043 ****** 2026-01-10 14:46:34.750817 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.750825 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.750831 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:34.750837 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.750843 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:46:34.750849 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:46:34.750856 | orchestrator | 2026-01-10 
14:46:34.750862 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-01-10 14:46:34.750868 | orchestrator | Saturday 10 January 2026 14:42:00 +0000 (0:00:00.603) 0:00:06.646 ****** 2026-01-10 14:46:34.750875 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-01-10 14:46:34.750881 | orchestrator | 2026-01-10 14:46:34.750886 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-01-10 14:46:34.750890 | orchestrator | Saturday 10 January 2026 14:42:03 +0000 (0:00:03.198) 0:00:09.845 ****** 2026-01-10 14:46:34.750894 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-01-10 14:46:34.750899 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-01-10 14:46:34.750903 | orchestrator | 2026-01-10 14:46:34.750918 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-01-10 14:46:34.750922 | orchestrator | Saturday 10 January 2026 14:42:10 +0000 (0:00:07.406) 0:00:17.252 ****** 2026-01-10 14:46:34.750926 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:46:34.750929 | orchestrator | 2026-01-10 14:46:34.750933 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-01-10 14:46:34.750937 | orchestrator | Saturday 10 January 2026 14:42:14 +0000 (0:00:04.093) 0:00:21.345 ****** 2026-01-10 14:46:34.750941 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:46:34.750956 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-01-10 14:46:34.750963 | orchestrator | 2026-01-10 14:46:34.750975 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-01-10 14:46:34.750999 | orchestrator | Saturday 10 January 2026 14:42:19 
+0000 (0:00:04.598) 0:00:25.944 ****** 2026-01-10 14:46:34.751005 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:46:34.751012 | orchestrator | 2026-01-10 14:46:34.751018 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-01-10 14:46:34.751025 | orchestrator | Saturday 10 January 2026 14:42:23 +0000 (0:00:03.947) 0:00:29.892 ****** 2026-01-10 14:46:34.751031 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-01-10 14:46:34.751037 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-01-10 14:46:34.751044 | orchestrator | 2026-01-10 14:46:34.751050 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-10 14:46:34.751063 | orchestrator | Saturday 10 January 2026 14:42:32 +0000 (0:00:08.906) 0:00:38.799 ****** 2026-01-10 14:46:34.751069 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.751075 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.751081 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:34.751088 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.751094 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:46:34.751099 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:46:34.751103 | orchestrator | 2026-01-10 14:46:34.751106 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-01-10 14:46:34.751110 | orchestrator | Saturday 10 January 2026 14:42:33 +0000 (0:00:00.963) 0:00:39.762 ****** 2026-01-10 14:46:34.751114 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.751118 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.751122 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.751125 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:34.751129 | orchestrator | skipping: [testbed-node-5] 2026-01-10 
14:46:34.751133 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:46:34.751136 | orchestrator | 2026-01-10 14:46:34.751140 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-01-10 14:46:34.751144 | orchestrator | Saturday 10 January 2026 14:42:37 +0000 (0:00:03.971) 0:00:43.734 ****** 2026-01-10 14:46:34.751148 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:46:34.751151 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:46:34.751155 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:46:34.751159 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:46:34.751163 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:46:34.751166 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:46:34.751170 | orchestrator | 2026-01-10 14:46:34.751174 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-10 14:46:34.751177 | orchestrator | Saturday 10 January 2026 14:42:39 +0000 (0:00:02.387) 0:00:46.121 ****** 2026-01-10 14:46:34.751181 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.751185 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.751189 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:34.751212 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:46:34.751217 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:46:34.751221 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.751229 | orchestrator | 2026-01-10 14:46:34.751233 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-01-10 14:46:34.751237 | orchestrator | Saturday 10 January 2026 14:42:42 +0000 (0:00:03.298) 0:00:49.420 ****** 2026-01-10 14:46:34.751246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.751259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.751267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.751272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.751276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.751282 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.751286 | orchestrator | 2026-01-10 14:46:34.751290 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-01-10 14:46:34.751294 | orchestrator | Saturday 10 January 2026 14:42:47 +0000 (0:00:04.671) 0:00:54.091 ****** 2026-01-10 14:46:34.751298 | orchestrator | [WARNING]: Skipped 2026-01-10 14:46:34.751302 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-01-10 14:46:34.751309 | orchestrator | due to this access issue: 2026-01-10 14:46:34.751313 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-01-10 14:46:34.751317 | orchestrator | a directory 2026-01-10 14:46:34.751321 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:46:34.751325 | orchestrator | 2026-01-10 14:46:34.751329 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-10 14:46:34.751335 | orchestrator | Saturday 10 January 2026 14:42:48 +0000 (0:00:01.182) 0:00:55.274 ****** 2026-01-10 14:46:34.751339 | orchestrator | 
included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:46:34.751344 | orchestrator | 2026-01-10 14:46:34.751347 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-01-10 14:46:34.751351 | orchestrator | Saturday 10 January 2026 14:42:50 +0000 (0:00:01.357) 0:00:56.632 ****** 2026-01-10 14:46:34.751355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.751359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.751363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.751370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-01-10 14:46:34.751381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.751385 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.751389 | orchestrator | 2026-01-10 14:46:34.751393 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-01-10 14:46:34.751397 | orchestrator | Saturday 10 January 2026 14:42:54 +0000 (0:00:04.031) 0:01:00.663 ****** 2026-01-10 14:46:34.751401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:46:34.751405 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:34.751411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.751415 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:46:34.751422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:46:34.751426 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.751433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:46:34.751437 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.751441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.751445 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.751449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.751453 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:46:34.751456 | orchestrator | 2026-01-10 14:46:34.751460 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-10 14:46:34.751464 | orchestrator | Saturday 10 January 2026 14:42:58 +0000 (0:00:03.975) 0:01:04.639 ****** 2026-01-10 14:46:34.751470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:46:34.751477 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:34.751485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:46:34.751489 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.751493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:46:34.751497 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.751500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.751504 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.751508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.751517 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:46:34.751523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.751527 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:46:34.751531 | orchestrator | 2026-01-10 14:46:34.751535 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-01-10 14:46:34.751539 | orchestrator | Saturday 10 January 2026 14:43:01 +0000 (0:00:03.373) 0:01:08.012 ****** 2026-01-10 14:46:34.751542 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.751546 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.751550 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:34.751554 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:46:34.751557 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.751561 | orchestrator | skipping: [testbed-node-4] 2026-01-10 
14:46:34.751565 | orchestrator | 2026-01-10 14:46:34.751569 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-01-10 14:46:34.751575 | orchestrator | Saturday 10 January 2026 14:43:03 +0000 (0:00:02.478) 0:01:10.491 ****** 2026-01-10 14:46:34.751579 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.751583 | orchestrator | 2026-01-10 14:46:34.751586 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-10 14:46:34.751590 | orchestrator | Saturday 10 January 2026 14:43:04 +0000 (0:00:00.094) 0:01:10.586 ****** 2026-01-10 14:46:34.751594 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.751598 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.751601 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:34.751605 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.751609 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:46:34.751613 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:46:34.751616 | orchestrator | 2026-01-10 14:46:34.751620 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-10 14:46:34.751624 | orchestrator | Saturday 10 January 2026 14:43:04 +0000 (0:00:00.593) 0:01:11.180 ****** 2026-01-10 14:46:34.751628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:46:34.751632 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.751639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.751643 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.751649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 
14:46:34.751653 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:46:34.751657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:46:34.751661 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:34.751890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.751903 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:46:34.751907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:46:34.751916 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.751920 | orchestrator | 2026-01-10 14:46:34.751924 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-10 14:46:34.751928 | orchestrator | Saturday 10 January 2026 14:43:07 +0000 (0:00:02.907) 0:01:14.087 ****** 2026-01-10 14:46:34.751932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.751937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.751945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 
14:46:34.751967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.751975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.751980 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.751983 | orchestrator | 2026-01-10 14:46:34.751987 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-10 14:46:34.751991 | orchestrator | Saturday 10 January 2026 14:43:12 +0000 (0:00:04.429) 0:01:18.517 ****** 2026-01-10 14:46:34.751998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.752004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.752009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.752016 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.752020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.752025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.752029 | orchestrator | 2026-01-10 14:46:34.752033 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-10 14:46:34.752037 | orchestrator | Saturday 10 January 2026 
14:43:18 +0000 (0:00:06.884) 0:01:25.402 ******
2026-01-10 14:46:34.752044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752048 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752059 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752067 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752076 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752084 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752097 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752101 | orchestrator |
2026-01-10 14:46:34.752105 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-01-10 14:46:34.752109 | orchestrator | Saturday 10 January 2026 14:43:21 +0000 (0:00:03.077) 0:01:28.479 ******
2026-01-10 14:46:34.752113 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752117 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752121 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:46:34.752124 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752128 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:46:34.752132 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:46:34.752135 | orchestrator |
2026-01-10 14:46:34.752139 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-01-10 14:46:34.752143 | orchestrator | Saturday 10 January 2026 14:43:24 +0000 (0:00:02.928) 0:01:31.407 ******
2026-01-10 14:46:34.752147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752151 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752159 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752168 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752191 | orchestrator |
2026-01-10 14:46:34.752259 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-01-10 14:46:34.752267 | orchestrator | Saturday 10 January 2026 14:43:28 +0000 (0:00:04.070) 0:01:35.478 ******
2026-01-10 14:46:34.752272 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752278 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752284 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752290 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752296 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752302 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752308 | orchestrator |
2026-01-10 14:46:34.752313 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-01-10 14:46:34.752319 | orchestrator | Saturday 10 January 2026 14:43:31 +0000 (0:00:02.170) 0:01:37.649 ******
2026-01-10 14:46:34.752325 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752331 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752337 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752343 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752349 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752361 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752368 | orchestrator |
2026-01-10 14:46:34.752374 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-01-10 14:46:34.752382 | orchestrator | Saturday 10 January 2026 14:43:33 +0000 (0:00:02.002) 0:01:39.651 ******
2026-01-10 14:46:34.752385 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752393 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752397 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752400 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752404 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752408 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752411 | orchestrator |
2026-01-10 14:46:34.752415 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-01-10 14:46:34.752419 | orchestrator | Saturday 10 January 2026 14:43:34 +0000 (0:00:01.784) 0:01:41.436 ******
2026-01-10 14:46:34.752422 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752426 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752448 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752451 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752455 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752459 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752463 | orchestrator |
2026-01-10 14:46:34.752466 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-01-10 14:46:34.752470 | orchestrator | Saturday 10 January 2026 14:43:37 +0000 (0:00:02.397) 0:01:43.833 ******
2026-01-10 14:46:34.752474 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752478 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752481 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752485 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752492 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752496 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752500 | orchestrator |
2026-01-10 14:46:34.752504 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-01-10 14:46:34.752508 | orchestrator | Saturday 10 January 2026 14:43:39 +0000 (0:00:02.239) 0:01:46.072 ******
2026-01-10 14:46:34.752512 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752516 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752520 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752525 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752529 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752533 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752538 | orchestrator |
2026-01-10 14:46:34.752542 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-01-10 14:46:34.752547 | orchestrator | Saturday 10 January 2026 14:43:43 +0000 (0:00:03.520) 0:01:49.593 ******
2026-01-10 14:46:34.752551 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:46:34.752555 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752560 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:46:34.752564 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752568 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:46:34.752572 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752577 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:46:34.752581 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752586 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:46:34.752590 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752594 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:46:34.752599 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752603 | orchestrator |
2026-01-10 14:46:34.752607 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-01-10 14:46:34.752611 | orchestrator | Saturday 10 January 2026 14:43:45 +0000 (0:00:02.017) 0:01:51.611 ******
2026-01-10 14:46:34.752616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752624 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752636 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752648 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752657 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752669 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752678 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752682 | orchestrator |
2026-01-10 14:46:34.752687 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-01-10 14:46:34.752691 | orchestrator | Saturday 10 January 2026 14:43:47 +0000 (0:00:01.989) 0:01:53.600 ******
2026-01-10 14:46:34.752698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752703 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752715 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.752727 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752736 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752747 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:46:34.752756 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752761 | orchestrator |
2026-01-10 14:46:34.752765 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-01-10 14:46:34.752769 | orchestrator | Saturday 10 January 2026 14:43:49 +0000 (0:00:02.370) 0:01:55.971 ******
2026-01-10 14:46:34.752773 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752780 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752785 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752789 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752794 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752800 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752807 | orchestrator |
2026-01-10 14:46:34.752813 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-01-10 14:46:34.752820 | orchestrator | Saturday 10 January 2026 14:43:52 +0000 (0:00:03.382) 0:01:59.353 ******
2026-01-10 14:46:34.752826 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752833 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752840 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752847 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:46:34.752858 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:46:34.752864 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:46:34.752868 | orchestrator |
2026-01-10 14:46:34.752872 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-01-10 14:46:34.752876 | orchestrator | Saturday 10 January 2026 14:43:57 +0000 (0:00:04.550) 0:02:03.904 ******
2026-01-10 14:46:34.752880 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752883 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752887 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752891 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752894 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752898 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752902 | orchestrator |
2026-01-10 14:46:34.752906 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-01-10 14:46:34.752909 | orchestrator | Saturday 10 January 2026 14:44:01 +0000 (0:00:04.490) 0:02:08.395 ******
2026-01-10 14:46:34.752913 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752917 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752920 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752924 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752928 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752931 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752935 | orchestrator |
2026-01-10 14:46:34.752939 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-01-10 14:46:34.752943 | orchestrator | Saturday 10 January 2026 14:44:04 +0000 (0:00:02.486) 0:02:10.881 ******
2026-01-10 14:46:34.752946 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752950 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752954 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752957 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752961 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.752965 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752968 | orchestrator |
2026-01-10 14:46:34.752972 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-01-10 14:46:34.752976 | orchestrator | Saturday 10 January 2026 14:44:06 +0000 (0:00:01.937) 0:02:12.818 ******
2026-01-10 14:46:34.752980 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.752983 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.752987 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.752991 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.752994 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.752998 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.753002 | orchestrator |
2026-01-10 14:46:34.753005 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-01-10 14:46:34.753009 | orchestrator | Saturday 10 January 2026 14:44:08 +0000 (0:00:02.156) 0:02:14.975 ******
2026-01-10 14:46:34.753013 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.753017 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.753020 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.753024 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.753028 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.753032 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.753036 | orchestrator |
2026-01-10 14:46:34.753039 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-01-10 14:46:34.753043 | orchestrator | Saturday 10 January 2026 14:44:10 +0000 (0:00:02.353) 0:02:17.328 ******
2026-01-10 14:46:34.753047 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.753050 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.753054 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.753058 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.753062 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.753065 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.753069 | orchestrator |
2026-01-10 14:46:34.753078 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-01-10 14:46:34.753082 | orchestrator | Saturday 10 January 2026 14:44:13 +0000 (0:00:03.043) 0:02:20.372 ******
2026-01-10 14:46:34.753086 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.753090 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.753093 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.753097 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.753101 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.753104 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.753108 | orchestrator |
2026-01-10 14:46:34.753112 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-01-10 14:46:34.753116 | orchestrator | Saturday 10 January 2026 14:44:15 +0000 (0:00:01.868) 0:02:22.240 ******
2026-01-10 14:46:34.753119 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:46:34.753123 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.753127 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:46:34.753131 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:46:34.753135 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:46:34.753138 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.753142 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:46:34.753146 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:46:34.753153 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:46:34.753156 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:46:34.753160 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:46:34.753164 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:46:34.753168 | orchestrator |
2026-01-10 14:46:34.753171 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-01-10 14:46:34.753175 | orchestrator | Saturday 10 January 2026 14:44:18 +0000 (0:00:02.623) 0:02:24.864 ******
2026-01-10 14:46:34.753179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.753183 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:46:34.753187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-10 14:46:34.753205 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:46:34.753212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-10 14:46:34.753216 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.753220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.753224 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.753231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.753235 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:46:34.753238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:46:34.753242 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:46:34.753246 | orchestrator | 2026-01-10 14:46:34.753250 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-01-10 14:46:34.753254 | orchestrator | Saturday 10 January 2026 14:44:20 +0000 (0:00:02.267) 0:02:27.132 ****** 2026-01-10 14:46:34.753257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.753267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.753274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-10 14:46:34.753278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.753282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.753290 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:46:34.753294 | orchestrator | 2026-01-10 14:46:34.753298 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-10 14:46:34.753302 | orchestrator | Saturday 10 January 2026 14:44:25 +0000 (0:00:04.899) 0:02:32.032 ****** 2026-01-10 14:46:34.753305 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:46:34.753309 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:46:34.753313 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:46:34.753317 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:46:34.753320 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:46:34.753324 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:46:34.753328 | orchestrator | 2026-01-10 14:46:34.753331 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-01-10 14:46:34.753335 | orchestrator | Saturday 10 January 2026 14:44:25 +0000 (0:00:00.453) 0:02:32.485 ****** 2026-01-10 14:46:34.753341 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:34.753345 | orchestrator | 2026-01-10 14:46:34.753348 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-01-10 14:46:34.753352 | orchestrator | Saturday 10 January 2026 
14:44:28 +0000 (0:00:02.436) 0:02:34.921 ****** 2026-01-10 14:46:34.753356 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:34.753360 | orchestrator | 2026-01-10 14:46:34.753363 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-01-10 14:46:34.753367 | orchestrator | Saturday 10 January 2026 14:44:30 +0000 (0:00:02.046) 0:02:36.968 ****** 2026-01-10 14:46:34.753371 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:34.753375 | orchestrator | 2026-01-10 14:46:34.753378 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:46:34.753382 | orchestrator | Saturday 10 January 2026 14:45:11 +0000 (0:00:40.898) 0:03:17.866 ****** 2026-01-10 14:46:34.753386 | orchestrator | 2026-01-10 14:46:34.753389 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:46:34.753393 | orchestrator | Saturday 10 January 2026 14:45:11 +0000 (0:00:00.061) 0:03:17.928 ****** 2026-01-10 14:46:34.753397 | orchestrator | 2026-01-10 14:46:34.753401 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:46:34.753404 | orchestrator | Saturday 10 January 2026 14:45:11 +0000 (0:00:00.216) 0:03:18.144 ****** 2026-01-10 14:46:34.753408 | orchestrator | 2026-01-10 14:46:34.753412 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:46:34.753415 | orchestrator | Saturday 10 January 2026 14:45:11 +0000 (0:00:00.064) 0:03:18.209 ****** 2026-01-10 14:46:34.753419 | orchestrator | 2026-01-10 14:46:34.753425 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:46:34.753429 | orchestrator | Saturday 10 January 2026 14:45:11 +0000 (0:00:00.073) 0:03:18.283 ****** 2026-01-10 14:46:34.753433 | orchestrator | 2026-01-10 14:46:34.753437 | orchestrator | TASK 
[neutron : Flush Handlers] ************************************************ 2026-01-10 14:46:34.753441 | orchestrator | Saturday 10 January 2026 14:45:11 +0000 (0:00:00.068) 0:03:18.351 ****** 2026-01-10 14:46:34.753444 | orchestrator | 2026-01-10 14:46:34.753448 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-01-10 14:46:34.753455 | orchestrator | Saturday 10 January 2026 14:45:11 +0000 (0:00:00.061) 0:03:18.413 ****** 2026-01-10 14:46:34.753459 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:46:34.753463 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:46:34.753467 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:46:34.753471 | orchestrator | 2026-01-10 14:46:34.753474 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-01-10 14:46:34.753478 | orchestrator | Saturday 10 January 2026 14:45:38 +0000 (0:00:26.113) 0:03:44.527 ****** 2026-01-10 14:46:34.753482 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:46:34.753485 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:46:34.753489 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:46:34.753493 | orchestrator | 2026-01-10 14:46:34.753497 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:46:34.753500 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:46:34.753505 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-10 14:46:34.753509 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-10 14:46:34.753513 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:46:34.753517 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 
skipped=32  rescued=0 ignored=0 2026-01-10 14:46:34.753521 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:46:34.753524 | orchestrator | 2026-01-10 14:46:34.753528 | orchestrator | 2026-01-10 14:46:34.753532 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:46:34.753536 | orchestrator | Saturday 10 January 2026 14:46:31 +0000 (0:00:53.526) 0:04:38.053 ****** 2026-01-10 14:46:34.753539 | orchestrator | =============================================================================== 2026-01-10 14:46:34.753543 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 53.53s 2026-01-10 14:46:34.753547 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.90s 2026-01-10 14:46:34.753550 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.11s 2026-01-10 14:46:34.753554 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.91s 2026-01-10 14:46:34.753558 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.41s 2026-01-10 14:46:34.753562 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.88s 2026-01-10 14:46:34.753565 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.90s 2026-01-10 14:46:34.753569 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.67s 2026-01-10 14:46:34.753573 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.60s 2026-01-10 14:46:34.753576 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.55s 2026-01-10 14:46:34.753582 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.49s 2026-01-10 
14:46:34.753586 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.43s 2026-01-10 14:46:34.753590 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 4.09s 2026-01-10 14:46:34.753593 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.07s 2026-01-10 14:46:34.753597 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.03s 2026-01-10 14:46:34.753601 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.98s 2026-01-10 14:46:34.753607 | orchestrator | Load and persist kernel modules ----------------------------------------- 3.97s 2026-01-10 14:46:34.753611 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.95s 2026-01-10 14:46:34.753615 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.52s 2026-01-10 14:46:34.753618 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.38s 2026-01-10 14:46:34.753622 | orchestrator | 2026-01-10 14:46:34 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:46:34.753667 | orchestrator | 2026-01-10 14:46:34 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:46:34.754176 | orchestrator | 2026-01-10 14:46:34 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:46:34.754260 | orchestrator | 2026-01-10 14:46:34 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:37.787420 | orchestrator | 2026-01-10 14:46:37 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state STARTED 2026-01-10 14:46:37.787700 | orchestrator | 2026-01-10 14:46:37 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:46:37.788882 | orchestrator | 2026-01-10 14:46:37 | INFO  | Task 
62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:47:41.673194 | orchestrator | 2026-01-10 14:47:41 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:47:41.673559 | orchestrator | 2026-01-10 14:47:41 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:47:44.723499 | orchestrator | 2026-01-10 14:47:44 | INFO  | Task 95461a44-2449-45c1-bdbd-4eaa52ef1920 is in state SUCCESS 2026-01-10 14:47:44.725007 | orchestrator | 2026-01-10 14:47:44.725056 | orchestrator | 2026-01-10 14:47:44.725063 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:47:44.725070 | orchestrator | 2026-01-10 14:47:44.725075 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:47:44.725080 | orchestrator | Saturday 10 January 2026 14:44:25 +0000 (0:00:00.294) 0:00:00.294 ****** 2026-01-10 14:47:44.725085 | orchestrator | ok: [testbed-manager] 2026-01-10 14:47:44.725104 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:47:44.725109 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:47:44.725124 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:47:44.725129 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:47:44.725134 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:47:44.725138 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:47:44.725143 | orchestrator | 2026-01-10 14:47:44.725148 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:47:44.725154 | orchestrator | Saturday 10 January 2026 14:44:26 +0000 (0:00:00.847) 0:00:01.141 ****** 2026-01-10 14:47:44.725159 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-10 14:47:44.725164 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-10 14:47:44.725169 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-10 
14:47:44.725174 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-10 14:47:44.725179 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-10 14:47:44.725183 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-10 14:47:44.725188 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-10 14:47:44.725193 | orchestrator | 2026-01-10 14:47:44.725197 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-10 14:47:44.725245 | orchestrator | 2026-01-10 14:47:44.725250 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-10 14:47:44.725255 | orchestrator | Saturday 10 January 2026 14:44:27 +0000 (0:00:00.678) 0:00:01.820 ****** 2026-01-10 14:47:44.725260 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:47:44.725266 | orchestrator | 2026-01-10 14:47:44.725271 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-10 14:47:44.725276 | orchestrator | Saturday 10 January 2026 14:44:28 +0000 (0:00:01.386) 0:00:03.207 ****** 2026-01-10 14:47:44.725291 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:47:44.725311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725322 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725380 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725404 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725449 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725455 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:47:44.725462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725465 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725495 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725511 | orchestrator | 2026-01-10 14:47:44.725514 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-10 14:47:44.725517 | orchestrator | Saturday 10 January 2026 14:44:31 +0000 (0:00:02.692) 0:00:05.899 ****** 2026-01-10 14:47:44.725520 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:47:44.725523 | orchestrator | 2026-01-10 14:47:44.725527 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-10 14:47:44.725530 | orchestrator | Saturday 10 January 2026 14:44:33 +0000 (0:00:01.591) 0:00:07.491 ****** 2026-01-10 14:47:44.725535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725543 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725552 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:47:44.725555 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-10 14:47:44.725572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725578 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.725583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725598 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725606 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725627 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:47:44.725633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.725643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725647 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.725791 | orchestrator | 2026-01-10 14:47:44.725795 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-10 14:47:44.725798 | orchestrator | Saturday 10 January 2026 14:44:40 +0000 (0:00:07.233) 0:00:14.724 ****** 2026-01-10 14:47:44.725802 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-10 14:47:44.725806 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.725812 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.725816 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-10 14:47:44.725822 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.725826 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:47:44.725841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-01-10 14:47:44.725848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.725852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.725856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.725861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.725878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.725882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.725889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.725895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.725899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.725902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.725906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-01-10 14:47:44.725912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.725916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.725920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.725923 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:47:44.725927 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:47:44.725931 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:47:44.725939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.725944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.725947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.725952 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:47:44.725955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.725961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.725965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.725968 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:47:44.725971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.725975 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.725992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.725996 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:47:44.725999 | orchestrator | 2026-01-10 14:47:44.726003 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-10 14:47:44.726006 | orchestrator | Saturday 10 January 2026 14:44:41 +0000 (0:00:01.601) 0:00:16.325 ****** 2026-01-10 14:47:44.726009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 
14:47:44.726219 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-10 14:47:44.726238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.726245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.726251 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.726263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.726273 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.726280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.726286 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:47:44.726292 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-10 14:47:44.726301 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.726307 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:47:44.726312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.726324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.726328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.726334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.726337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.726341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.726344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.726349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.726352 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:47:44.726356 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:47:44.726359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.726365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.726371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.726374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.726378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:47:44.726381 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:47:44.726384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.726387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.726392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.726399 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:47:44.726403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:47:44.726406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.726412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:47:44.726415 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:47:44.726418 | orchestrator | 2026-01-10 14:47:44.726421 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-10 14:47:44.726425 | orchestrator | Saturday 10 January 2026 14:44:44 +0000 (0:00:02.517) 0:00:18.843 ****** 2026-01-10 14:47:44.726428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.726432 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-10 14:47:44.726435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.726445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.726451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.726456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.726463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.726468 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.726473 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:47:44.726478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.726483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.726493 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.726498 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.726502 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.726538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.726546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.726552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.726557 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-10 14:47:44.726571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.726576 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.726579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.726589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 
14:47:44.726592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.726596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.726599 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.726608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:47:44.726615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.726635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.726640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:47:44.726645 | orchestrator | 2026-01-10 14:47:44.726650 | orchestrator | TASK [prometheus : Find custom 
prometheus alert rules files] ******************* 2026-01-10 14:47:44.726655 | orchestrator | Saturday 10 January 2026 14:44:51 +0000 (0:00:06.637) 0:00:25.481 ****** 2026-01-10 14:47:44.726660 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:47:44.726665 | orchestrator | 2026-01-10 14:47:44.726670 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-10 14:47:44.726678 | orchestrator | Saturday 10 January 2026 14:44:52 +0000 (0:00:01.406) 0:00:26.888 ****** 2026-01-10 14:47:44.726683 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319967, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0268624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.726689 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319967, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0268624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.726697 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319967, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0268624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.726705 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319991, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0309653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.726710 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319967, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0268624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.726715 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319991, 'dev': 120, 'nlink': 1, 'atime': 
1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0309653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.726722 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319967, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0268624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:47:44.726727 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319967, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0268624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.726733 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319967, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0268624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
2026-01-10 14:47:44.726742 | orchestrator | skipping: [testbed-node-0] .. [testbed-node-5] => (items under /operations/prometheus/: prometheus.rules, ceph.rules, openstack.rules, cadvisor.rules, haproxy.rules, node.rules, hardware.rules, elasticsearch.rules, mysql.rules, alertmanager.rules, redfish.rules, prometheus-extra.rules, prometheus.rec.rules, alertmanager.rec.rules, ceph.rec.rules, node.rec.rules; each a regular file, mode 0644, owner root:root)
2026-01-10 14:47:44.726809 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules, mode 0644, owner root:root, size 12980)
2026-01-10 14:47:44.727076 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, mode 0644, owner root:root, size 55956)
2026-01-10 14:47:44.727184 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules, mode 0644, owner root:root, size 12293)
2026-01-10 14:47:44.727260 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, mode 0644, owner root:root, size 3900)
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727277 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320000, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0320144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727280 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:47:44.727284 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319990, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.030503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727289 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319990, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.030503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-01-10 14:47:44.727293 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320002, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0325236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727296 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319958, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.025358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727574 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319951, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0235124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727585 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319951, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0235124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727591 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319953, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.023963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727594 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319951, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0235124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727601 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320002, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0325236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727604 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1319987, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0301666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727608 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1319987, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0301666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727613 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320002, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 
'ctime': 1768053532.0325236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727617 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319978, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0286295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727622 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1319970, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0273702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:47:44.727625 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319958, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.025358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727630 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319975, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0283954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727634 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319958, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.025358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727637 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320002, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0325236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727642 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1319987, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0301666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727645 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319953, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.023963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727650 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320000, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0320144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727655 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:47:44.727659 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319958, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.025358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727662 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319953, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.023963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727665 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319978, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0286295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727669 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1319987, 
'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0301666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727675 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319978, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0286295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727678 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319975, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0283954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727683 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319953, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.023963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727688 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1319980, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0289829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:47:44.727692 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319958, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.025358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727695 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319975, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0283954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727699 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320000, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0320144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727702 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:47:44.727707 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319978, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0286295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727711 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319953, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.023963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727715 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320000, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0320144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727722 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:47:44.727726 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319975, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0283954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727729 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319978, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0286295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727732 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320000, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0320144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727735 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:47:44.727739 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319975, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0283954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727743 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1319974, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0280402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:47:44.727747 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 3539, 'inode': 1320000, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0320144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-10 14:47:44.727753 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:47:44.727758 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1319964, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.026381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:47:44.727762 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319990, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.030503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:47:44.727765 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319951, 'dev': 120, 'nlink': 1, 'atime': 
1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0235124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:47:44.727768 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320002, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0325236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:47:44.727772 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1319987, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0301666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:47:44.727776 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319958, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.025358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:47:44.727780 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319953, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.023963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:47:44.727787 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319978, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0286295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:47:44.727791 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319975, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0283954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:47:44.727794 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320000, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0320144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-10 14:47:44.727797 | orchestrator |
2026-01-10 14:47:44.727801 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-01-10 14:47:44.727804 | orchestrator | Saturday 10 January 2026 14:45:20 +0000 (0:00:28.418) 0:00:55.306 ******
2026-01-10 14:47:44.727807 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:47:44.727810 | orchestrator |
2026-01-10 14:47:44.727813 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-01-10 14:47:44.727817 | orchestrator | Saturday 10 January 2026 14:45:21 +0000 (0:00:00.684) 0:00:55.991 ******
2026-01-10 14:47:44.727820 | orchestrator | [WARNING]: Skipped
2026-01-10 14:47:44.727823 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727827 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-01-10 14:47:44.727830 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727833 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-01-10 14:47:44.727836 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:47:44.727839 | orchestrator | [WARNING]: Skipped
2026-01-10 14:47:44.727842 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727845 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-01-10 14:47:44.727849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727852 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-01-10 14:47:44.727855 | orchestrator | [WARNING]: Skipped
2026-01-10 14:47:44.727858 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727863 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-01-10 14:47:44.727866 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727870 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-01-10 14:47:44.727873 | orchestrator | [WARNING]: Skipped
2026-01-10 14:47:44.727881 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727886 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-01-10 14:47:44.727891 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727896 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-01-10 14:47:44.727900 | orchestrator | [WARNING]: Skipped
2026-01-10 14:47:44.727905 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727910 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-01-10 14:47:44.727915 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727920 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-01-10 14:47:44.727933 | orchestrator | [WARNING]: Skipped
2026-01-10 14:47:44.727944 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727949 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-01-10 14:47:44.727953 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727958 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-01-10 14:47:44.727963 | orchestrator | [WARNING]: Skipped
2026-01-10 14:47:44.727968 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727972 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-01-10 14:47:44.727980 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-10 14:47:44.727985 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-01-10 14:47:44.727990 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:47:44.727995 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-10 14:47:44.728000 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-10 14:47:44.728005 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-10 14:47:44.728009 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-10 14:47:44.728014 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-10 14:47:44.728019 | orchestrator |
2026-01-10 14:47:44.728024 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-01-10 14:47:44.728028 | orchestrator | Saturday 10 January 2026 14:45:23 +0000 (0:00:01.682) 0:00:57.674 ******
2026-01-10 14:47:44.728033 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-10 14:47:44.728038 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:47:44.728043 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-10 14:47:44.728047 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:47:44.728052 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-10 14:47:44.728057 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:47:44.728062 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-10 14:47:44.728067 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:47:44.728072 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-10 14:47:44.728076 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:47:44.728081 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-10 14:47:44.728086 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:47:44.728091 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-10 14:47:44.728099 | orchestrator |
2026-01-10 14:47:44.728104 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-01-10 14:47:44.728108 | orchestrator | Saturday 10 January 2026 14:45:38 +0000 (0:00:15.580) 0:01:13.254 ******
2026-01-10 14:47:44.728142 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-10 14:47:44.728148 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-10 14:47:44.728153 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:47:44.728157 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:47:44.728162 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-10 14:47:44.728167 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:47:44.728172 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-10 14:47:44.728177 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:47:44.728182 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-10 14:47:44.728186 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:47:44.728191 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-10 14:47:44.728196 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:47:44.728201 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-10 14:47:44.728205 | orchestrator |
2026-01-10 14:47:44.728210 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-01-10 14:47:44.728215 | orchestrator | Saturday 10 January 2026 14:45:44 +0000 (0:00:05.226) 0:01:18.481 ******
2026-01-10 14:47:44.728220 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-10 14:47:44.728334 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-10 14:47:44.728342 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-10 14:47:44.728347 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:47:44.728351 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-10 14:47:44.728355 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-10 14:47:44.728358 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:47:44.728362 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:47:44.728366 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:47:44.728369 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-10 14:47:44.728373 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:47:44.728377 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-10 14:47:44.728380 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:47:44.728384 | orchestrator |
2026-01-10 14:47:44.728387 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-01-10 14:47:44.728394 | orchestrator | Saturday 10 January 2026 14:45:47 +0000 (0:00:03.537) 0:01:22.018 ******
2026-01-10 14:47:44.728398 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:47:44.728401 | orchestrator |
2026-01-10 14:47:44.728405 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-01-10 14:47:44.728408 | orchestrator | Saturday 10 January 2026 14:45:48 +0000 (0:00:01.038) 0:01:23.057 ******
2026-01-10 14:47:44.728415 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:47:44.728419 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:47:44.728422 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:47:44.728426 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:47:44.728429 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:47:44.728432 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:47:44.728435 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:47:44.728438 | orchestrator |
2026-01-10 14:47:44.728441 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-01-10 14:47:44.728444 | orchestrator | Saturday 10 January 2026 14:45:49 +0000 (0:00:00.642) 0:01:23.700 ******
2026-01-10 14:47:44.728447 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:47:44.728450 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:47:44.728453 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:47:44.728456 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:47:44.728459 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:47:44.728462 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:47:44.728465 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:47:44.728468 | orchestrator |
2026-01-10 14:47:44.728471 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-01-10 14:47:44.728475 | orchestrator | Saturday 10 January 2026 14:45:51 +0000 (0:00:02.545) 0:01:26.245 ******
2026-01-10 14:47:44.728478 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-10 14:47:44.728481 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:47:44.728484 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-10 14:47:44.728487 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:47:44.728490 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-10 14:47:44.728493 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:47:44.728496 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-10 14:47:44.728499 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:47:44.728502 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-10 14:47:44.728505 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:47:44.728508 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-10 14:47:44.728511 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:47:44.728514 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-10 14:47:44.728517 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:47:44.728520 | orchestrator |
2026-01-10 14:47:44.728524 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-01-10 14:47:44.728527 | orchestrator | Saturday 10 January 2026 14:45:53 +0000 (0:00:01.728) 0:01:27.973 ******
2026-01-10 14:47:44.728530 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-10 14:47:44.728533 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:47:44.728536 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-10 14:47:44.728539 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:47:44.728542 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-10 14:47:44.728545 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-10 14:47:44.728548 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:47:44.728551 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:47:44.728557 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-10 14:47:44.728561 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:47:44.728598 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-10 14:47:44.728602 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:47:44.728605 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-10 14:47:44.728608 | orchestrator |
2026-01-10 14:47:44.728611 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-01-10 14:47:44.728614 | orchestrator | Saturday 10 January 2026 14:45:55 +0000 (0:00:01.714) 0:01:29.687 ******
2026-01-10 14:47:44.728617 | orchestrator | [WARNING]: Skipped
2026-01-10 14:47:44.728621 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-01-10 14:47:44.728624 | orchestrator | due to this access issue:
2026-01-10 14:47:44.728627 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-01-10 14:47:44.728630 | orchestrator | not a directory
2026-01-10 14:47:44.728633 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:47:44.728636 | orchestrator |
2026-01-10 14:47:44.728639 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-01-10 14:47:44.728642 | orchestrator | Saturday 10 January 2026 14:45:56 +0000 (0:00:01.033) 0:01:30.721 ******
2026-01-10 14:47:44.728645 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:47:44.728648 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:47:44.728652 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:47:44.728655 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:47:44.728659 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:47:44.728663 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:47:44.728666 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:47:44.728669 | orchestrator |
2026-01-10 14:47:44.728672 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-01-10 14:47:44.728675 | orchestrator | Saturday 10 January 2026 14:45:57 +0000 (0:00:00.765) 0:01:31.487 ******
2026-01-10 14:47:44.728678 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:47:44.728681 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:47:44.728684 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:47:44.728687 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:47:44.728690 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:47:44.728693 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:47:44.728696 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:47:44.728699 | orchestrator |
2026-01-10 14:47:44.728702 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-01-10 14:47:44.728705 | orchestrator | Saturday 10 January 2026 14:45:57 +0000 (0:00:00.813) 0:01:32.301 ******
2026-01-10 14:47:44.728709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:47:44.728713 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-10 14:47:44.728719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:47:44.728724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:47:44.728727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:47:44.728730 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:47:44.728736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:47:44.728739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:47:44.728742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:47:44.728746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:47:44.728751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:47:44.728757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:47:44.728760 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:47:44.728764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:47:44.728770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:47:44.728773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:47:44.728776 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:47:44.728783 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-10 14:47:44.728786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:47:44.728791 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-10 14:47:44.728795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-10 14:47:44.728800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:47:44.728803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:47:44.728807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-10 14:47:44.728812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:47:44.728816 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:47:44.728820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:47:44.728824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:47:44.728829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:47:44.728834 | orchestrator |
2026-01-10 14:47:44.728842 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-10 14:47:44.728847 | orchestrator | Saturday 10 January 2026 14:46:01 +0000 (0:00:03.695) 0:01:35.996 ******
2026-01-10 14:47:44.728852 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-10 14:47:44.728857 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:47:44.728862 | orchestrator |
2026-01-10 14:47:44.728867 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:47:44.728872 | orchestrator | Saturday 10 January 2026 14:46:02 +0000 (0:00:01.268) 0:01:37.265 ****** 2026-01-10 14:47:44.728878 | orchestrator | 2026-01-10 14:47:44.728881 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-10 14:47:44.728884 | orchestrator | Saturday 10 January 2026 14:46:02 +0000 (0:00:00.070) 0:01:37.335 ****** 2026-01-10 14:47:44.728887 | orchestrator | 2026-01-10 14:47:44.728890 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-10 14:47:44.728893 | orchestrator | Saturday 10 January 2026 14:46:03 +0000 (0:00:00.067) 0:01:37.403 ****** 2026-01-10 14:47:44.728899 | orchestrator | 2026-01-10 14:47:44.728902 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-10 14:47:44.728905 | orchestrator | Saturday 10 January 2026 14:46:03 +0000 (0:00:00.065) 0:01:37.468 ****** 2026-01-10 14:47:44.728908 | orchestrator | 2026-01-10 14:47:44.728911 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-10 14:47:44.728914 | orchestrator | Saturday 10 January 2026 14:46:03 +0000 (0:00:00.277) 0:01:37.745 ****** 2026-01-10 14:47:44.728917 | orchestrator | 2026-01-10 14:47:44.728921 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-10 14:47:44.728924 | orchestrator | Saturday 10 January 2026 14:46:03 +0000 (0:00:00.067) 0:01:37.813 ****** 2026-01-10 14:47:44.728927 | orchestrator | 2026-01-10 14:47:44.728930 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-10 14:47:44.728933 | orchestrator | Saturday 10 January 2026 14:46:03 +0000 (0:00:00.065) 0:01:37.878 ****** 2026-01-10 14:47:44.728936 | orchestrator | 2026-01-10 14:47:44.728939 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] 
************* 2026-01-10 14:47:44.728942 | orchestrator | Saturday 10 January 2026 14:46:03 +0000 (0:00:00.088) 0:01:37.967 ****** 2026-01-10 14:47:44.728945 | orchestrator | changed: [testbed-manager] 2026-01-10 14:47:44.728948 | orchestrator | 2026-01-10 14:47:44.728951 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-01-10 14:47:44.728954 | orchestrator | Saturday 10 January 2026 14:46:23 +0000 (0:00:20.000) 0:01:57.968 ****** 2026-01-10 14:47:44.728957 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:47:44.728961 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:47:44.728964 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:47:44.728967 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:47:44.728970 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:47:44.728973 | orchestrator | changed: [testbed-manager] 2026-01-10 14:47:44.728976 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:47:44.728979 | orchestrator | 2026-01-10 14:47:44.728982 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-01-10 14:47:44.728985 | orchestrator | Saturday 10 January 2026 14:46:38 +0000 (0:00:14.725) 0:02:12.694 ****** 2026-01-10 14:47:44.728988 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:47:44.728991 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:47:44.728994 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:47:44.728997 | orchestrator | 2026-01-10 14:47:44.729000 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-10 14:47:44.729003 | orchestrator | Saturday 10 January 2026 14:46:49 +0000 (0:00:10.678) 0:02:23.372 ****** 2026-01-10 14:47:44.729006 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:47:44.729009 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:47:44.729012 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:47:44.729015 
| orchestrator | 2026-01-10 14:47:44.729018 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-10 14:47:44.729021 | orchestrator | Saturday 10 January 2026 14:46:59 +0000 (0:00:10.735) 0:02:34.107 ****** 2026-01-10 14:47:44.729024 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:47:44.729028 | orchestrator | changed: [testbed-manager] 2026-01-10 14:47:44.729031 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:47:44.729034 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:47:44.729039 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:47:44.729042 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:47:44.729045 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:47:44.729048 | orchestrator | 2026-01-10 14:47:44.729051 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-10 14:47:44.729054 | orchestrator | Saturday 10 January 2026 14:47:10 +0000 (0:00:10.372) 0:02:44.480 ****** 2026-01-10 14:47:44.729058 | orchestrator | changed: [testbed-manager] 2026-01-10 14:47:44.729061 | orchestrator | 2026-01-10 14:47:44.729064 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-10 14:47:44.729069 | orchestrator | Saturday 10 January 2026 14:47:17 +0000 (0:00:07.500) 0:02:51.980 ****** 2026-01-10 14:47:44.729072 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:47:44.729075 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:47:44.729078 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:47:44.729081 | orchestrator | 2026-01-10 14:47:44.729084 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-10 14:47:44.729087 | orchestrator | Saturday 10 January 2026 14:47:28 +0000 (0:00:10.787) 0:03:02.767 ****** 2026-01-10 14:47:44.729090 | orchestrator | changed: [testbed-manager] 2026-01-10 14:47:44.729093 | 
orchestrator | 2026-01-10 14:47:44.729096 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-10 14:47:44.729100 | orchestrator | Saturday 10 January 2026 14:47:33 +0000 (0:00:05.569) 0:03:08.337 ****** 2026-01-10 14:47:44.729103 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:47:44.729107 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:47:44.729122 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:47:44.729127 | orchestrator | 2026-01-10 14:47:44.729131 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:47:44.729142 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-10 14:47:44.729149 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-10 14:47:44.729154 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-10 14:47:44.729159 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-10 14:47:44.729164 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-10 14:47:44.729168 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-10 14:47:44.729173 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-10 14:47:44.729178 | orchestrator | 2026-01-10 14:47:44.729183 | orchestrator | 2026-01-10 14:47:44.729188 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:47:44.729193 | orchestrator | Saturday 10 January 2026 14:47:42 +0000 (0:00:08.430) 0:03:16.767 ****** 2026-01-10 14:47:44.729198 | orchestrator | 
=============================================================================== 2026-01-10 14:47:44.729203 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.42s 2026-01-10 14:47:44.729207 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.00s 2026-01-10 14:47:44.729212 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.58s 2026-01-10 14:47:44.729217 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.73s 2026-01-10 14:47:44.729222 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.79s 2026-01-10 14:47:44.729227 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.74s 2026-01-10 14:47:44.729232 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.68s 2026-01-10 14:47:44.729237 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 10.37s 2026-01-10 14:47:44.729242 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 8.43s 2026-01-10 14:47:44.729247 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.50s 2026-01-10 14:47:44.729252 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.23s 2026-01-10 14:47:44.729260 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.64s 2026-01-10 14:47:44.729266 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.57s 2026-01-10 14:47:44.729270 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.23s 2026-01-10 14:47:44.729275 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.70s 2026-01-10 14:47:44.729280 | orchestrator | prometheus : 
Copying over prometheus alertmanager config file ----------- 3.54s 2026-01-10 14:47:44.729285 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.69s 2026-01-10 14:47:44.729290 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.55s 2026-01-10 14:47:44.729295 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.52s 2026-01-10 14:47:44.729300 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.73s 2026-01-10 14:47:44.729308 | orchestrator | 2026-01-10 14:47:44 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:47:44.731575 | orchestrator | 2026-01-10 14:47:44 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:47:44.733263 | orchestrator | 2026-01-10 14:47:44 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:47:44.735147 | orchestrator | 2026-01-10 14:47:44 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:47:44.735183 | orchestrator | 2026-01-10 14:47:44 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:47:47.790158 | orchestrator | 2026-01-10 14:47:47 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:47:47.791199 | orchestrator | 2026-01-10 14:47:47 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:47:47.792364 | orchestrator | 2026-01-10 14:47:47 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:47:47.793691 | orchestrator | 2026-01-10 14:47:47 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:47:47.793797 | orchestrator | 2026-01-10 14:47:47 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:47:50.848149 | orchestrator | 2026-01-10 14:47:50 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in 
state STARTED [identical state-polling output for tasks 750666f3-21a8-4127-9c05-ce58fcbc417e, 729922a7-4b21-46d8-9d02-cb4be214eaf4, 62b7798e-73ee-4a9e-b320-649111de17d4 and 2d375c6e-10a8-4424-b72c-6bc5a9ad59be, repeated every ~3 seconds from 14:47:50 to 14:48:45, elided] 2026-01-10 14:48:45.780028 | orchestrator | 2026-01-10 14:48:45 | INFO  | Task 
729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:48:45.780186 | orchestrator | 2026-01-10 14:48:45 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:48:45.781572 | orchestrator | 2026-01-10 14:48:45 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:48:45.781755 | orchestrator | 2026-01-10 14:48:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:48.831759 | orchestrator | 2026-01-10 14:48:48 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:48:48.836267 | orchestrator | 2026-01-10 14:48:48 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:48:48.839817 | orchestrator | 2026-01-10 14:48:48 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:48:48.842367 | orchestrator | 2026-01-10 14:48:48 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state STARTED 2026-01-10 14:48:48.842412 | orchestrator | 2026-01-10 14:48:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:51.895492 | orchestrator | 2026-01-10 14:48:51 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:48:51.898828 | orchestrator | 2026-01-10 14:48:51 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:48:51.901425 | orchestrator | 2026-01-10 14:48:51 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:48:51.904695 | orchestrator | 2026-01-10 14:48:51 | INFO  | Task 2d375c6e-10a8-4424-b72c-6bc5a9ad59be is in state SUCCESS 2026-01-10 14:48:51.906239 | orchestrator | 2026-01-10 14:48:51.906277 | orchestrator | 2026-01-10 14:48:51.906285 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:48:51.906291 | orchestrator | 2026-01-10 14:48:51.906297 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-01-10 14:48:51.906303 | orchestrator | Saturday 10 January 2026 14:45:57 +0000 (0:00:00.242) 0:00:00.242 ****** 2026-01-10 14:48:51.906309 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:48:51.906316 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:48:51.906321 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:48:51.906326 | orchestrator | 2026-01-10 14:48:51.906332 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:48:51.906338 | orchestrator | Saturday 10 January 2026 14:45:58 +0000 (0:00:00.291) 0:00:00.533 ****** 2026-01-10 14:48:51.906343 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-01-10 14:48:51.906349 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-01-10 14:48:51.906355 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-01-10 14:48:51.906360 | orchestrator | 2026-01-10 14:48:51.906366 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-01-10 14:48:51.906371 | orchestrator | 2026-01-10 14:48:51.906401 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-10 14:48:51.906408 | orchestrator | Saturday 10 January 2026 14:45:58 +0000 (0:00:00.411) 0:00:00.945 ****** 2026-01-10 14:48:51.906414 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:48:51.906420 | orchestrator | 2026-01-10 14:48:51.906426 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-01-10 14:48:51.906432 | orchestrator | Saturday 10 January 2026 14:45:59 +0000 (0:00:00.729) 0:00:01.674 ****** 2026-01-10 14:48:51.906437 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-01-10 14:48:51.906443 | orchestrator | 2026-01-10 14:48:51.906448 | orchestrator | TASK 
[service-ks-register : glance | Creating endpoints] *********************** 2026-01-10 14:48:51.906454 | orchestrator | Saturday 10 January 2026 14:46:02 +0000 (0:00:03.307) 0:00:04.981 ****** 2026-01-10 14:48:51.906460 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-01-10 14:48:51.906466 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-01-10 14:48:51.906471 | orchestrator | 2026-01-10 14:48:51.906477 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-01-10 14:48:51.906483 | orchestrator | Saturday 10 January 2026 14:46:08 +0000 (0:00:06.037) 0:00:11.020 ****** 2026-01-10 14:48:51.906489 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:48:51.906494 | orchestrator | 2026-01-10 14:48:51.906499 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-01-10 14:48:51.906521 | orchestrator | Saturday 10 January 2026 14:46:11 +0000 (0:00:03.092) 0:00:14.112 ****** 2026-01-10 14:48:51.906527 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:48:51.906533 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-01-10 14:48:51.906537 | orchestrator | 2026-01-10 14:48:51.906542 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-01-10 14:48:51.906548 | orchestrator | Saturday 10 January 2026 14:46:15 +0000 (0:00:03.887) 0:00:17.999 ****** 2026-01-10 14:48:51.906553 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:48:51.906558 | orchestrator | 2026-01-10 14:48:51.906564 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-01-10 14:48:51.906569 | orchestrator | Saturday 10 January 2026 14:46:18 +0000 (0:00:03.334) 0:00:21.334 ****** 2026-01-10 
14:48:51.906575 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-01-10 14:48:51.906580 | orchestrator | 2026-01-10 14:48:51.906585 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-01-10 14:48:51.906591 | orchestrator | Saturday 10 January 2026 14:46:23 +0000 (0:00:04.277) 0:00:25.611 ****** 2026-01-10 14:48:51.906638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.906649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.906661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.906668 | orchestrator | 2026-01-10 14:48:51.906673 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-10 14:48:51.906679 | orchestrator | Saturday 10 January 2026 14:46:29 +0000 (0:00:06.648) 0:00:32.260 ****** 2026-01-10 14:48:51.906684 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:48:51.906690 | 
orchestrator | 2026-01-10 14:48:51.906695 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-01-10 14:48:51.906705 | orchestrator | Saturday 10 January 2026 14:46:30 +0000 (0:00:00.541) 0:00:32.802 ****** 2026-01-10 14:48:51.906710 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:51.906716 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:51.906721 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:51.906726 | orchestrator | 2026-01-10 14:48:51.906732 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-01-10 14:48:51.906740 | orchestrator | Saturday 10 January 2026 14:46:34 +0000 (0:00:03.770) 0:00:36.572 ****** 2026-01-10 14:48:51.906745 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:48:51.906751 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:48:51.906756 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:48:51.906762 | orchestrator | 2026-01-10 14:48:51.906767 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-01-10 14:48:51.906772 | orchestrator | Saturday 10 January 2026 14:46:35 +0000 (0:00:01.467) 0:00:38.040 ****** 2026-01-10 14:48:51.906777 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:48:51.906787 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:48:51.906793 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:48:51.906798 | orchestrator | 2026-01-10 14:48:51.906803 | 
orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-01-10 14:48:51.906809 | orchestrator | Saturday 10 January 2026 14:46:36 +0000 (0:00:01.163) 0:00:39.203 ****** 2026-01-10 14:48:51.906815 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:48:51.906820 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:48:51.906825 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:48:51.906830 | orchestrator | 2026-01-10 14:48:51.906835 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-01-10 14:48:51.906841 | orchestrator | Saturday 10 January 2026 14:46:37 +0000 (0:00:00.670) 0:00:39.874 ****** 2026-01-10 14:48:51.906846 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.906851 | orchestrator | 2026-01-10 14:48:51.906857 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-01-10 14:48:51.906862 | orchestrator | Saturday 10 January 2026 14:46:37 +0000 (0:00:00.356) 0:00:40.231 ****** 2026-01-10 14:48:51.906868 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.906873 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.906879 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.906885 | orchestrator | 2026-01-10 14:48:51.906891 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-10 14:48:51.906897 | orchestrator | Saturday 10 January 2026 14:46:38 +0000 (0:00:00.370) 0:00:40.601 ****** 2026-01-10 14:48:51.906903 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:48:51.906909 | orchestrator | 2026-01-10 14:48:51.906915 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-01-10 14:48:51.906921 | orchestrator | Saturday 10 January 2026 14:46:38 +0000 (0:00:00.593) 0:00:41.195 ****** 2026-01-10 
14:48:51.906928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.906943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.906954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.906961 | orchestrator | 2026-01-10 14:48:51.906967 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-10 14:48:51.906973 | orchestrator | Saturday 10 January 2026 14:46:43 +0000 (0:00:05.026) 0:00:46.221 ****** 2026-01-10 14:48:51.906987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:51.907002 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.907009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:51.907015 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.907027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:51.907050 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.907055 | orchestrator | 2026-01-10 14:48:51.907061 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-10 14:48:51.907066 | orchestrator | Saturday 10 January 2026 14:46:46 +0000 (0:00:02.488) 0:00:48.709 ****** 2026-01-10 14:48:51.907071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:51.907077 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.907083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:51.907100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:51.907107 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.907112 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.907118 | orchestrator | 2026-01-10 14:48:51.907123 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-10 14:48:51.907129 | orchestrator | Saturday 10 January 2026 14:46:48 +0000 (0:00:02.683) 0:00:51.393 ****** 2026-01-10 14:48:51.907135 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.907140 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.907146 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.907152 | orchestrator | 2026-01-10 14:48:51.907158 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-10 14:48:51.907163 | orchestrator | Saturday 10 January 2026 14:46:53 +0000 (0:00:04.633) 0:00:56.026 ****** 2026-01-10 14:48:51.907170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.907186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.907194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.907200 | orchestrator | 2026-01-10 14:48:51.907206 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-10 14:48:51.907212 | orchestrator | Saturday 10 January 2026 14:46:57 +0000 (0:00:04.062) 0:01:00.088 ****** 2026-01-10 14:48:51.907226 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:51.907232 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:51.907237 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:51.907243 | orchestrator | 2026-01-10 14:48:51.907249 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-10 14:48:51.907254 | orchestrator | Saturday 10 January 2026 14:47:05 +0000 (0:00:08.191) 0:01:08.280 ****** 2026-01-10 14:48:51.907259 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.907264 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.907270 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.907275 | orchestrator | 2026-01-10 14:48:51.907280 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-01-10 14:48:51.907285 | orchestrator | Saturday 10 January 2026 14:47:09 +0000 (0:00:03.763) 0:01:12.044 ****** 2026-01-10 14:48:51.907291 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.907296 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.907301 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.907306 | orchestrator | 
2026-01-10 14:48:51.907312 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-10 14:48:51.907317 | orchestrator | Saturday 10 January 2026 14:47:13 +0000 (0:00:03.927) 0:01:15.972 ****** 2026-01-10 14:48:51.907323 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.907425 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.907434 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.907440 | orchestrator | 2026-01-10 14:48:51.907445 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-10 14:48:51.907450 | orchestrator | Saturday 10 January 2026 14:47:19 +0000 (0:00:05.542) 0:01:21.515 ****** 2026-01-10 14:48:51.907456 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.907461 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.907470 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.907475 | orchestrator | 2026-01-10 14:48:51.907481 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-10 14:48:51.907487 | orchestrator | Saturday 10 January 2026 14:47:22 +0000 (0:00:03.401) 0:01:24.916 ****** 2026-01-10 14:48:51.907493 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.907499 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.907504 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.907509 | orchestrator | 2026-01-10 14:48:51.907515 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-10 14:48:51.907520 | orchestrator | Saturday 10 January 2026 14:47:22 +0000 (0:00:00.310) 0:01:25.226 ****** 2026-01-10 14:48:51.907525 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-10 14:48:51.907531 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.907536 | orchestrator | 
skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-10 14:48:51.907541 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.907547 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-10 14:48:51.907552 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.907557 | orchestrator | 2026-01-10 14:48:51.907562 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-10 14:48:51.907567 | orchestrator | Saturday 10 January 2026 14:47:26 +0000 (0:00:03.568) 0:01:28.795 ****** 2026-01-10 14:48:51.907572 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:51.907577 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:51.907582 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:51.907588 | orchestrator | 2026-01-10 14:48:51.907593 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-01-10 14:48:51.907598 | orchestrator | Saturday 10 January 2026 14:47:31 +0000 (0:00:05.532) 0:01:34.327 ****** 2026-01-10 14:48:51.907604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.907623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.907630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:51.907640 | orchestrator | 2026-01-10 14:48:51.907645 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-10 14:48:51.907650 | orchestrator | Saturday 10 January 2026 14:47:38 +0000 (0:00:06.575) 0:01:40.903 ****** 2026-01-10 14:48:51.907656 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:51.907661 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:51.907666 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:51.907671 | orchestrator | 2026-01-10 14:48:51.907676 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-10 14:48:51.907681 | orchestrator | Saturday 10 January 2026 14:47:38 +0000 (0:00:00.258) 0:01:41.161 ****** 2026-01-10 14:48:51.907686 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:51.907691 | orchestrator | 2026-01-10 14:48:51.907696 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-10 14:48:51.907701 | orchestrator | Saturday 10 January 2026 14:47:41 +0000 (0:00:03.059) 0:01:44.221 ****** 2026-01-10 14:48:51.907706 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:51.907711 | orchestrator | 2026-01-10 14:48:51.907716 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-10 14:48:51.907721 | orchestrator | Saturday 10 January 2026 14:47:44 +0000 (0:00:02.455) 0:01:46.677 ****** 2026-01-10 14:48:51.907727 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:51.907732 | orchestrator | 2026-01-10 14:48:51.907737 | orchestrator | TASK [glance : Running Glance 
bootstrap container] ***************************** 2026-01-10 14:48:51.907743 | orchestrator | Saturday 10 January 2026 14:47:46 +0000 (0:00:02.080) 0:01:48.758 ****** 2026-01-10 14:48:51.907749 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:51.907754 | orchestrator | 2026-01-10 14:48:51.907760 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-10 14:48:51.907766 | orchestrator | Saturday 10 January 2026 14:48:17 +0000 (0:00:31.529) 0:02:20.287 ****** 2026-01-10 14:48:51.907771 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:51.907777 | orchestrator | 2026-01-10 14:48:51.907782 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-10 14:48:51.907788 | orchestrator | Saturday 10 January 2026 14:48:19 +0000 (0:00:02.045) 0:02:22.333 ****** 2026-01-10 14:48:51.907794 | orchestrator | 2026-01-10 14:48:51.907804 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-10 14:48:51.907811 | orchestrator | Saturday 10 January 2026 14:48:20 +0000 (0:00:00.305) 0:02:22.638 ****** 2026-01-10 14:48:51.907816 | orchestrator | 2026-01-10 14:48:51.907821 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-10 14:48:51.907827 | orchestrator | Saturday 10 January 2026 14:48:20 +0000 (0:00:00.065) 0:02:22.704 ****** 2026-01-10 14:48:51.907833 | orchestrator | 2026-01-10 14:48:51.907841 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-01-10 14:48:51.907846 | orchestrator | Saturday 10 January 2026 14:48:20 +0000 (0:00:00.072) 0:02:22.776 ****** 2026-01-10 14:48:51.907851 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:51.907860 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:51.907865 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:51.907871 | orchestrator | 
2026-01-10 14:48:51.907876 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:48:51.907882 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-10 14:48:51.907888 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-10 14:48:51.907893 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-10 14:48:51.907898 | orchestrator | 2026-01-10 14:48:51.907903 | orchestrator | 2026-01-10 14:48:51.907908 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:48:51.907914 | orchestrator | Saturday 10 January 2026 14:48:51 +0000 (0:00:30.974) 0:02:53.751 ****** 2026-01-10 14:48:51.907919 | orchestrator | =============================================================================== 2026-01-10 14:48:51.907924 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 31.53s 2026-01-10 14:48:51.907929 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.97s 2026-01-10 14:48:51.907934 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.19s 2026-01-10 14:48:51.907939 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.65s 2026-01-10 14:48:51.907944 | orchestrator | glance : Check glance containers ---------------------------------------- 6.58s 2026-01-10 14:48:51.907950 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.04s 2026-01-10 14:48:51.907955 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.54s 2026-01-10 14:48:51.907961 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.53s 2026-01-10 
14:48:51.907966 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.03s 2026-01-10 14:48:51.907972 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.63s 2026-01-10 14:48:51.907977 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.28s 2026-01-10 14:48:51.907982 | orchestrator | glance : Copying over config.json files for services -------------------- 4.06s 2026-01-10 14:48:51.907987 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.93s 2026-01-10 14:48:51.907992 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.89s 2026-01-10 14:48:51.907997 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.77s 2026-01-10 14:48:51.908002 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.76s 2026-01-10 14:48:51.908008 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.57s 2026-01-10 14:48:51.908013 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.40s 2026-01-10 14:48:51.908019 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.33s 2026-01-10 14:48:51.908024 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.31s 2026-01-10 14:48:51.908047 | orchestrator | 2026-01-10 14:48:51 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:54.954279 | orchestrator | 2026-01-10 14:48:54 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:48:54.955747 | orchestrator | 2026-01-10 14:48:54 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:48:54.957904 | orchestrator | 2026-01-10 14:48:54 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 
2026-01-10 14:48:54.959363 | orchestrator | 2026-01-10 14:48:54 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:48:54.959482 | orchestrator | 2026-01-10 14:48:54 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:58.005352 | orchestrator | 2026-01-10 14:48:58 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:48:58.005780 | orchestrator | 2026-01-10 14:48:58 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:48:58.009659 | orchestrator | 2026-01-10 14:48:58 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:48:58.010834 | orchestrator | 2026-01-10 14:48:58 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:48:58.010882 | orchestrator | 2026-01-10 14:48:58 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:01.061738 | orchestrator | 2026-01-10 14:49:01 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:49:01.062809 | orchestrator | 2026-01-10 14:49:01 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:49:01.063773 | orchestrator | 2026-01-10 14:49:01 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:49:01.064657 | orchestrator | 2026-01-10 14:49:01 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:49:01.064715 | orchestrator | 2026-01-10 14:49:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:04.115699 | orchestrator | 2026-01-10 14:49:04 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:49:04.117206 | orchestrator | 2026-01-10 14:49:04 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:49:04.118562 | orchestrator | 2026-01-10 14:49:04 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:49:04.119780 | 
orchestrator | 2026-01-10 14:49:04 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:49:04.119854 | orchestrator | 2026-01-10 14:49:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:07.150280 | orchestrator | 2026-01-10 14:49:07 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:49:07.150961 | orchestrator | 2026-01-10 14:49:07 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:49:07.153805 | orchestrator | 2026-01-10 14:49:07 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:49:07.155848 | orchestrator | 2026-01-10 14:49:07 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state STARTED 2026-01-10 14:49:07.155903 | orchestrator | 2026-01-10 14:49:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:10.194703 | orchestrator | 2026-01-10 14:49:10 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:49:10.194799 | orchestrator | 2026-01-10 14:49:10 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:49:10.194808 | orchestrator | 2026-01-10 14:49:10 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:49:10.197183 | orchestrator | 2026-01-10 14:49:10 | INFO  | Task 62b7798e-73ee-4a9e-b320-649111de17d4 is in state SUCCESS 2026-01-10 14:49:10.198445 | orchestrator | 2026-01-10 14:49:10.198492 | orchestrator | 2026-01-10 14:49:10.198500 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:49:10.198507 | orchestrator | 2026-01-10 14:49:10.198512 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:49:10.198516 | orchestrator | Saturday 10 January 2026 14:46:20 +0000 (0:00:00.273) 0:00:00.273 ****** 2026-01-10 14:49:10.198520 | orchestrator | ok: [testbed-node-0] 2026-01-10 
14:49:10.198545 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:49:10.198549 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:49:10.198553 | orchestrator | 2026-01-10 14:49:10.198557 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:49:10.198561 | orchestrator | Saturday 10 January 2026 14:46:20 +0000 (0:00:00.305) 0:00:00.579 ****** 2026-01-10 14:49:10.198565 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-01-10 14:49:10.198570 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-01-10 14:49:10.198574 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-01-10 14:49:10.198578 | orchestrator | 2026-01-10 14:49:10.198582 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-01-10 14:49:10.198585 | orchestrator | 2026-01-10 14:49:10.198589 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:49:10.198593 | orchestrator | Saturday 10 January 2026 14:46:21 +0000 (0:00:00.443) 0:00:01.022 ****** 2026-01-10 14:49:10.198597 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:49:10.198602 | orchestrator | 2026-01-10 14:49:10.198606 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-01-10 14:49:10.198610 | orchestrator | Saturday 10 January 2026 14:46:21 +0000 (0:00:00.535) 0:00:01.558 ****** 2026-01-10 14:49:10.198614 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-01-10 14:49:10.198618 | orchestrator | 2026-01-10 14:49:10.198621 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-01-10 14:49:10.198625 | orchestrator | Saturday 10 January 2026 14:46:26 +0000 (0:00:04.264) 0:00:05.822 ****** 2026-01-10 14:49:10.198629 | orchestrator 
| changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-01-10 14:49:10.198634 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-01-10 14:49:10.198637 | orchestrator | 2026-01-10 14:49:10.198641 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-01-10 14:49:10.198645 | orchestrator | Saturday 10 January 2026 14:46:33 +0000 (0:00:07.439) 0:00:13.262 ****** 2026-01-10 14:49:10.198649 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:49:10.198653 | orchestrator | 2026-01-10 14:49:10.198657 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-01-10 14:49:10.198661 | orchestrator | Saturday 10 January 2026 14:46:36 +0000 (0:00:03.284) 0:00:16.546 ****** 2026-01-10 14:49:10.198675 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:49:10.198679 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-01-10 14:49:10.198683 | orchestrator | 2026-01-10 14:49:10.198686 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-01-10 14:49:10.198690 | orchestrator | Saturday 10 January 2026 14:46:40 +0000 (0:00:03.826) 0:00:20.373 ****** 2026-01-10 14:49:10.198694 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:49:10.198698 | orchestrator | 2026-01-10 14:49:10.198702 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-01-10 14:49:10.198705 | orchestrator | Saturday 10 January 2026 14:46:44 +0000 (0:00:03.339) 0:00:23.712 ****** 2026-01-10 14:49:10.198709 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-01-10 14:49:10.198713 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 
2026-01-10 14:49:10.198717 | orchestrator | 2026-01-10 14:49:10.198721 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-01-10 14:49:10.198724 | orchestrator | Saturday 10 January 2026 14:46:51 +0000 (0:00:07.531) 0:00:31.243 ****** 2026-01-10 14:49:10.198731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.198755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.198759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.198764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.198829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.198840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.198860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.198885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.198893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.198900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.198911 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.198918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.198929 | orchestrator | 2026-01-10 14:49:10.198936 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:49:10.198943 | orchestrator | Saturday 10 January 2026 14:46:53 +0000 (0:00:02.365) 0:00:33.609 ****** 2026-01-10 14:49:10.198949 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:10.198978 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:10.198984 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:10.198990 | orchestrator | 2026-01-10 14:49:10.198997 | orchestrator | TASK [cinder : 
include_tasks] ************************************************** 2026-01-10 14:49:10.199003 | orchestrator | Saturday 10 January 2026 14:46:54 +0000 (0:00:00.255) 0:00:33.865 ****** 2026-01-10 14:49:10.199094 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:49:10.199102 | orchestrator | 2026-01-10 14:49:10.199108 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-01-10 14:49:10.199115 | orchestrator | Saturday 10 January 2026 14:46:54 +0000 (0:00:00.649) 0:00:34.515 ****** 2026-01-10 14:49:10.199127 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-01-10 14:49:10.199134 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-01-10 14:49:10.199141 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-01-10 14:49:10.199146 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-01-10 14:49:10.199151 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-01-10 14:49:10.199157 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-01-10 14:49:10.199162 | orchestrator | 2026-01-10 14:49:10.199168 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-01-10 14:49:10.199174 | orchestrator | Saturday 10 January 2026 14:46:56 +0000 (0:00:01.970) 0:00:36.485 ****** 2026-01-10 14:49:10.199181 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:49:10.199189 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:49:10.199209 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:49:10.199217 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:49:10.199230 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:49:10.199237 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-10 14:49:10.199243 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:49:10.199255 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:49:10.199259 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:49:10.199267 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:49:10.199271 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:49:10.199275 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-10 14:49:10.199279 | orchestrator | 2026-01-10 14:49:10.199283 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-10 14:49:10.199290 | orchestrator | Saturday 10 January 2026 14:47:00 +0000 (0:00:03.536) 0:00:40.022 ****** 2026-01-10 14:49:10.199295 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:49:10.199299 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:49:10.199303 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-10 14:49:10.199307 | orchestrator | 
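The container definitions above all carry a `healthcheck` spec of the form `{'interval': '30', 'retries': '3', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}`. As a rough illustration only: the sketch below shows what a port-based check in the spirit of `healthcheck_port` might reduce to, assuming it only needs to verify TCP reachability. This is a hypothetical stand-in, not kolla's actual script (the real helper also verifies which process owns the connection).

```python
import socket
import sys


def check_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Hypothetical simplification of a container healthcheck such as
    `healthcheck_port cinder-backup 5672`; exit status maps to healthy/unhealthy.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable, or timed out -> unhealthy.
        return False


if __name__ == "__main__":
    # Example invocation: python healthcheck.py 127.0.0.1 5672
    target_host, target_port = sys.argv[1], int(sys.argv[2])
    sys.exit(0 if check_port(target_host, target_port) else 1)
```

A container runtime would run such a command every `interval` seconds and mark the container unhealthy after `retries` consecutive non-zero exits, matching the `interval`/`retries`/`timeout` fields in the log entries above.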
2026-01-10 14:49:10.199314 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-10 14:49:10.199318 | orchestrator | Saturday 10 January 2026 14:47:04 +0000 (0:00:03.966) 0:00:43.989 ****** 2026-01-10 14:49:10.199322 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-01-10 14:49:10.199326 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-01-10 14:49:10.199330 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-01-10 14:49:10.199333 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-01-10 14:49:10.199337 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-01-10 14:49:10.199384 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-01-10 14:49:10.199388 | orchestrator | 2026-01-10 14:49:10.199392 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-10 14:49:10.199396 | orchestrator | Saturday 10 January 2026 14:47:08 +0000 (0:00:03.951) 0:00:47.941 ****** 2026-01-10 14:49:10.199400 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-01-10 14:49:10.199403 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-10 14:49:10.199407 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-10 14:49:10.199411 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-10 14:49:10.199415 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-10 14:49:10.199418 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-10 14:49:10.199422 | orchestrator | 2026-01-10 14:49:10.199426 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-10 14:49:10.199430 | orchestrator | Saturday 10 January 2026 14:47:09 +0000 (0:00:01.105) 
0:00:49.046 ****** 2026-01-10 14:49:10.199433 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:10.199437 | orchestrator | 2026-01-10 14:49:10.199441 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-10 14:49:10.199445 | orchestrator | Saturday 10 January 2026 14:47:09 +0000 (0:00:00.155) 0:00:49.202 ****** 2026-01-10 14:49:10.199449 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:10.199452 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:10.199456 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:10.199460 | orchestrator | 2026-01-10 14:49:10.199463 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:49:10.199467 | orchestrator | Saturday 10 January 2026 14:47:10 +0000 (0:00:00.447) 0:00:49.649 ****** 2026-01-10 14:49:10.199471 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:49:10.199475 | orchestrator | 2026-01-10 14:49:10.199479 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-10 14:49:10.199485 | orchestrator | Saturday 10 January 2026 14:47:11 +0000 (0:00:00.965) 0:00:50.614 ****** 2026-01-10 14:49:10.199491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.199505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.199518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.199526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.199533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.199546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.199558 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.199565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.199574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.199580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.199587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.199940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.199973 | orchestrator | 2026-01-10 14:49:10.199980 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-10 14:49:10.199986 | orchestrator | Saturday 10 January 2026 14:47:15 +0000 (0:00:04.280) 0:00:54.895 ****** 2026-01-10 14:49:10.199993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:49:10.200029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200046 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:10.200058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:49:10.200070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200085 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:10.200089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:49:10.200093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200113 | orchestrator | skipping: 
[testbed-node-1] 2026-01-10 14:49:10.200116 | orchestrator | 2026-01-10 14:49:10.200120 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-01-10 14:49:10.200124 | orchestrator | Saturday 10 January 2026 14:47:16 +0000 (0:00:01.200) 0:00:56.096 ****** 2026-01-10 14:49:10.200134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:49:10.200140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200174 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:10.200181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:49:10.200187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200208 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:10.200214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:49:10.200238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200257 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:10.200264 | orchestrator | 2026-01-10 14:49:10.200271 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-01-10 14:49:10.200280 | orchestrator | Saturday 10 January 2026 14:47:18 +0000 (0:00:01.708) 0:00:57.805 ****** 2026-01-10 14:49:10.200286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.200293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.200306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.200313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200329 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200382 | orchestrator | 2026-01-10 14:49:10.200388 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-10 14:49:10.200393 | orchestrator | Saturday 10 January 2026 14:47:22 +0000 (0:00:04.657) 0:01:02.463 ****** 2026-01-10 14:49:10.200405 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-10 14:49:10.200411 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-10 14:49:10.200417 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-10 14:49:10.200423 | orchestrator | 2026-01-10 14:49:10.200429 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-01-10 14:49:10.200435 | orchestrator | Saturday 10 January 2026 14:47:24 +0000 (0:00:01.627) 0:01:04.091 ****** 2026-01-10 14:49:10.200445 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.200451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.200458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.200468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200485 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200524 | orchestrator | 2026-01-10 14:49:10.200528 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-01-10 14:49:10.200532 | orchestrator | Saturday 10 January 2026 14:47:39 +0000 (0:00:14.749) 0:01:18.840 ****** 2026-01-10 14:49:10.200536 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:10.200540 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:10.200544 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:10.200550 | orchestrator | 2026-01-10 14:49:10.200557 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-01-10 14:49:10.200566 | orchestrator | Saturday 10 January 2026 14:47:41 +0000 (0:00:01.782) 0:01:20.623 ****** 2026-01-10 14:49:10.200572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:49:10.200579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200606 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:10.200612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:49:10.200625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200650 | orchestrator | skipping: 
[testbed-node-1] 2026-01-10 14:49:10.200660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-10 14:49:10.200668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:49:10.200690 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:10.200696 | orchestrator | 2026-01-10 14:49:10.200702 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-01-10 14:49:10.200708 | orchestrator | Saturday 10 January 2026 14:47:41 +0000 (0:00:00.587) 0:01:21.211 ****** 2026-01-10 14:49:10.200714 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:10.200720 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:10.200726 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:10.200733 | orchestrator | 2026-01-10 14:49:10.200738 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-01-10 14:49:10.200745 | orchestrator | Saturday 10 January 2026 14:47:41 +0000 (0:00:00.314) 0:01:21.525 ****** 2026-01-10 14:49:10.200753 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.200768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.200775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-10 14:49:10.200787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200802 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:49:10.200870 | orchestrator | 2026-01-10 14:49:10.200877 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:49:10.200882 | orchestrator | Saturday 10 January 2026 14:47:44 +0000 (0:00:02.778) 0:01:24.303 ****** 2026-01-10 14:49:10.200885 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:10.200889 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:49:10.200893 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:49:10.200897 | orchestrator | 2026-01-10 14:49:10.200900 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-01-10 14:49:10.200904 | orchestrator | Saturday 10 January 2026 14:47:45 +0000 (0:00:00.587) 0:01:24.890 ****** 2026-01-10 14:49:10.200908 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:10.200912 | orchestrator | 2026-01-10 14:49:10.200915 | orchestrator | TASK [cinder : Creating Cinder database user and 
setting permissions] ********** 2026-01-10 14:49:10.200922 | orchestrator | Saturday 10 January 2026 14:47:47 +0000 (0:00:02.010) 0:01:26.901 ****** 2026-01-10 14:49:10.200926 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:10.200930 | orchestrator | 2026-01-10 14:49:10.200933 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-01-10 14:49:10.200937 | orchestrator | Saturday 10 January 2026 14:47:49 +0000 (0:00:02.137) 0:01:29.038 ****** 2026-01-10 14:49:10.200941 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:10.200945 | orchestrator | 2026-01-10 14:49:10.200949 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-10 14:49:10.200952 | orchestrator | Saturday 10 January 2026 14:48:09 +0000 (0:00:20.262) 0:01:49.301 ****** 2026-01-10 14:49:10.200956 | orchestrator | 2026-01-10 14:49:10.200960 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-10 14:49:10.200964 | orchestrator | Saturday 10 January 2026 14:48:09 +0000 (0:00:00.064) 0:01:49.365 ****** 2026-01-10 14:49:10.200967 | orchestrator | 2026-01-10 14:49:10.200971 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-10 14:49:10.200975 | orchestrator | Saturday 10 January 2026 14:48:09 +0000 (0:00:00.064) 0:01:49.430 ****** 2026-01-10 14:49:10.200978 | orchestrator | 2026-01-10 14:49:10.200982 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-01-10 14:49:10.200986 | orchestrator | Saturday 10 January 2026 14:48:09 +0000 (0:00:00.067) 0:01:49.498 ****** 2026-01-10 14:49:10.200990 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:10.200996 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:10.201002 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:10.201025 | orchestrator | 2026-01-10 14:49:10.201031 
| orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-01-10 14:49:10.201037 | orchestrator | Saturday 10 January 2026 14:48:34 +0000 (0:00:25.042) 0:02:14.541 ****** 2026-01-10 14:49:10.201042 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:10.201048 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:10.201054 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:10.201061 | orchestrator | 2026-01-10 14:49:10.201067 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-01-10 14:49:10.201074 | orchestrator | Saturday 10 January 2026 14:48:44 +0000 (0:00:09.902) 0:02:24.443 ****** 2026-01-10 14:49:10.201080 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:10.201086 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:10.201093 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:10.201099 | orchestrator | 2026-01-10 14:49:10.201110 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-01-10 14:49:10.201114 | orchestrator | Saturday 10 January 2026 14:49:02 +0000 (0:00:17.630) 0:02:42.073 ****** 2026-01-10 14:49:10.201118 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:49:10.201122 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:49:10.201126 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:49:10.201129 | orchestrator | 2026-01-10 14:49:10.201133 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-01-10 14:49:10.201141 | orchestrator | Saturday 10 January 2026 14:49:08 +0000 (0:00:06.188) 0:02:48.262 ****** 2026-01-10 14:49:10.201145 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:49:10.201149 | orchestrator | 2026-01-10 14:49:10.201153 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:49:10.201157 | orchestrator | 
testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-10 14:49:10.201162 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:49:10.201166 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:49:10.201169 | orchestrator | 2026-01-10 14:49:10.201173 | orchestrator | 2026-01-10 14:49:10.201177 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:49:10.201181 | orchestrator | Saturday 10 January 2026 14:49:08 +0000 (0:00:00.281) 0:02:48.543 ****** 2026-01-10 14:49:10.201184 | orchestrator | =============================================================================== 2026-01-10 14:49:10.201188 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.04s 2026-01-10 14:49:10.201192 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.26s 2026-01-10 14:49:10.201196 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 17.63s 2026-01-10 14:49:10.201200 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.75s 2026-01-10 14:49:10.201203 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.90s 2026-01-10 14:49:10.201207 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.53s 2026-01-10 14:49:10.201211 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.44s 2026-01-10 14:49:10.201214 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.19s 2026-01-10 14:49:10.201218 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.66s 2026-01-10 14:49:10.201224 | orchestrator | service-cert-copy : cinder 
| Copying over extra CA certificates --------- 4.28s 2026-01-10 14:49:10.201230 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.26s 2026-01-10 14:49:10.201236 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.97s 2026-01-10 14:49:10.201242 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.95s 2026-01-10 14:49:10.201249 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.83s 2026-01-10 14:49:10.201255 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.54s 2026-01-10 14:49:10.201264 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.34s 2026-01-10 14:49:10.201271 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.28s 2026-01-10 14:49:10.201275 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.78s 2026-01-10 14:49:10.201279 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.37s 2026-01-10 14:49:10.201282 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.14s 2026-01-10 14:49:10.201286 | orchestrator | 2026-01-10 14:49:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:13.246807 | orchestrator | 2026-01-10 14:49:13 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:49:13.250190 | orchestrator | 2026-01-10 14:49:13 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:49:13.252031 | orchestrator | 2026-01-10 14:49:13 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:49:13.252073 | orchestrator | 2026-01-10 14:49:13 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:49:16.297404 | orchestrator | 2026-01-10 14:49:16 | INFO  | 
Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED [… identical status checks for tasks 99c0af6e-fdad-465c-a884-3e4df0c80fb8, 750666f3-21a8-4127-9c05-ce58fcbc417e, and 729922a7-4b21-46d8-9d02-cb4be214eaf4, interleaved with "Wait 1 second(s) until the next check", repeat every 3 seconds until 14:50:47 …]
14:50:47.903517 | orchestrator | 2026-01-10 14:50:47 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:50.954837 | orchestrator | 2026-01-10 14:50:50 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:50:50.955656 | orchestrator | 2026-01-10 14:50:50 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:50:50.958293 | orchestrator | 2026-01-10 14:50:50 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:50:50.958351 | orchestrator | 2026-01-10 14:50:50 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:54.012692 | orchestrator | 2026-01-10 14:50:54 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:50:54.015523 | orchestrator | 2026-01-10 14:50:54 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:50:54.017967 | orchestrator | 2026-01-10 14:50:54 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:50:54.018109 | orchestrator | 2026-01-10 14:50:54 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:57.066285 | orchestrator | 2026-01-10 14:50:57 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:50:57.068123 | orchestrator | 2026-01-10 14:50:57 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:50:57.071469 | orchestrator | 2026-01-10 14:50:57 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:50:57.071599 | orchestrator | 2026-01-10 14:50:57 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:00.120439 | orchestrator | 2026-01-10 14:51:00 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:51:00.123113 | orchestrator | 2026-01-10 14:51:00 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state STARTED 2026-01-10 14:51:00.125397 | orchestrator | 2026-01-10 14:51:00 | 
INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:00.125499 | orchestrator | 2026-01-10 14:51:00 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:03.168441 | orchestrator | 2026-01-10 14:51:03 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:03.169491 | orchestrator | 2026-01-10 14:51:03 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:51:03.170568 | orchestrator | 2026-01-10 14:51:03 | INFO  | Task 750666f3-21a8-4127-9c05-ce58fcbc417e is in state SUCCESS 2026-01-10 14:51:03.171559 | orchestrator | 2026-01-10 14:51:03 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:03.171943 | orchestrator | 2026-01-10 14:51:03 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:06.211298 | orchestrator | 2026-01-10 14:51:06 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:06.211353 | orchestrator | 2026-01-10 14:51:06 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:51:06.212879 | orchestrator | 2026-01-10 14:51:06 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:06.212904 | orchestrator | 2026-01-10 14:51:06 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:09.264313 | orchestrator | 2026-01-10 14:51:09 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:09.266253 | orchestrator | 2026-01-10 14:51:09 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:51:09.268470 | orchestrator | 2026-01-10 14:51:09 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:09.268515 | orchestrator | 2026-01-10 14:51:09 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:12.325522 | orchestrator | 2026-01-10 14:51:12 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in 
state STARTED 2026-01-10 14:51:12.326224 | orchestrator | 2026-01-10 14:51:12 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:51:12.328579 | orchestrator | 2026-01-10 14:51:12 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:12.328625 | orchestrator | 2026-01-10 14:51:12 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:15.375715 | orchestrator | 2026-01-10 14:51:15 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:15.378547 | orchestrator | 2026-01-10 14:51:15 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:51:15.378603 | orchestrator | 2026-01-10 14:51:15 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:15.378610 | orchestrator | 2026-01-10 14:51:15 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:18.417248 | orchestrator | 2026-01-10 14:51:18 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:18.420604 | orchestrator | 2026-01-10 14:51:18 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state STARTED 2026-01-10 14:51:18.422975 | orchestrator | 2026-01-10 14:51:18 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:18.423024 | orchestrator | 2026-01-10 14:51:18 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:21.472097 | orchestrator | 2026-01-10 14:51:21 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:21.477908 | orchestrator | 2026-01-10 14:51:21 | INFO  | Task 99c0af6e-fdad-465c-a884-3e4df0c80fb8 is in state SUCCESS 2026-01-10 14:51:21.480270 | orchestrator | 2026-01-10 14:51:21.480326 | orchestrator | 2026-01-10 14:51:21.480355 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:51:21.480381 | orchestrator | 2026-01-10 14:51:21.480386 | 
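The long polling run above (task UUIDs checked in a fixed cycle, with a short wait between rounds, until each task reaches a terminal state) follows a simple wait loop. A minimal sketch, assuming a hypothetical `fetch_state` callable in place of the real OSISM/Celery task-state API:

```python
import itertools
import time

# Terminal Celery-style task states; anything else keeps the loop going.
TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, sleep=time.sleep):
    """Poll each pending task until all are terminal; return final states.

    fetch_state is a stand-in for the real task-status lookup; sleep is
    injectable so tests can skip the real delay.
    """
    final = {}
    pending = list(task_ids)
    while pending:
        still_pending = []
        for task_id in pending:
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL:
                final[task_id] = state
            else:
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            sleep(interval)
    return final

# Deterministic demo: each fake task reports STARTED twice, then SUCCESS.
_counters = {"a": itertools.count(), "b": itertools.count()}
def fake_state(task_id):
    return "SUCCESS" if next(_counters[task_id]) >= 2 else "STARTED"

result = wait_for_tasks(["a", "b"], fake_state, sleep=lambda s: None)
```

This is only an illustration of the observed log pattern, not the actual osism client implementation.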
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
Saturday 10 January 2026 14:47:46 +0000 (0:00:00.186)       0:00:00.186 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Saturday 10 January 2026 14:47:47 +0000 (0:00:00.319)       0:00:00.506 ******
ok: [testbed-node-0] => (item=enable_nova_True)
ok: [testbed-node-1] => (item=enable_nova_True)
ok: [testbed-node-2] => (item=enable_nova_True)

PLAY [Wait for the Nova service] ***********************************************

TASK [Waiting for Nova public port to be UP] ***********************************
Saturday 10 January 2026 14:47:48 +0000 (0:00:00.804)       0:00:01.311 ******

STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********

STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-node-0             : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-1             : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-2             : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Saturday 10 January 2026 14:50:59 +0000 (0:03:11.853)       0:03:13.165 ******
===============================================================================
Waiting for Nova public port to be UP --------------------------------- 191.85s
Group hosts based on enabled services ----------------------------------- 0.80s
Group hosts based on Kolla action --------------------------------------- 0.32s

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Saturday 10 January 2026 14:48:56 +0000 (0:00:00.255)       0:00:00.255 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
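The 191.85s "Waiting for Nova public port to be UP" task amounts to retrying a TCP connection until the service answers or a timeout expires. A minimal sketch of that behavior with plain sockets (an assumption for illustration, not the actual kolla-ansible `wait_for` implementation):

```python
import socket
import time

def wait_for_port(host, port, timeout=300.0, interval=2.0):
    """Retry a TCP connect to host:port until it succeeds or timeout elapses.

    Returns True as soon as a connection is accepted, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection raises OSError while the port is still down.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

Usage mirrors the log above: the play blocks (emitting keepalive output) until the Nova public endpoint starts accepting connections on each node.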
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Saturday 10 January 2026 14:48:56 +0000 (0:00:00.295)       0:00:00.550 ******
ok: [testbed-node-0] => (item=enable_grafana_True)
ok: [testbed-node-1] => (item=enable_grafana_True)
ok: [testbed-node-2] => (item=enable_grafana_True)

PLAY [Apply role grafana] ******************************************************

TASK [grafana : include_tasks] *************************************************
Saturday 10 January 2026 14:48:56 +0000 (0:00:00.479)       0:00:01.030 ******
included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [grafana : Ensuring config directories exist] *****************************
Saturday 10 January 2026 14:48:57 +0000 (0:00:00.546)       0:00:01.577 ******
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-1] => (item=grafana service definition, identical dict trimmed)
changed: [testbed-node-2] => (item=grafana service definition, identical dict trimmed)

TASK [grafana : Check if extra configuration file exists] **********************
Saturday 10 January 2026 14:48:58 +0000 (0:00:00.663)       0:00:02.241 ******
[WARNING]: Skipped '/operations/prometheus/grafana' path due to this access issue: '/operations/prometheus/grafana' is not a directory
ok: [testbed-node-0 -> localhost]

TASK [grafana : include_tasks] *************************************************
Saturday 10 January 2026 14:48:58 +0000 (0:00:00.818)       0:00:03.059 ******
included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
Saturday 10 January 2026 14:48:59 +0000 (0:00:00.714)       0:00:03.774 ******
changed: [testbed-node-2] => (item=grafana service definition, identical dict trimmed)
changed: [testbed-node-0] => (item=grafana service definition, identical dict trimmed)
changed: [testbed-node-1] => (item=grafana service definition, identical dict trimmed)

TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
Saturday 10 January 2026 14:49:00 +0000 (0:00:01.302)       0:00:05.076 ******
skipping: [testbed-node-0] => (item=grafana service definition, identical dict trimmed)
skipping: [testbed-node-1] => (item=grafana service definition, identical dict trimmed)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=grafana service definition, identical dict trimmed)
skipping: [testbed-node-2]

TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
Saturday 10 January 2026 14:49:01 +0000 (0:00:00.407)       0:00:05.484 ******
skipping: [testbed-node-0] => (item=grafana service definition, identical dict trimmed)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=grafana service definition, identical dict trimmed)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=grafana service definition, identical dict trimmed)
skipping: [testbed-node-2]

TASK [grafana : Copying over config.json files] ********************************
Saturday 10 January 2026 14:49:02 +0000 (0:00:00.773)       0:00:06.258 ******
changed: [testbed-node-1] => (item=grafana service definition, identical dict trimmed)
changed: [testbed-node-0] => (item=grafana service definition, identical dict trimmed)
changed: [testbed-node-2] => (item=grafana service definition, identical dict trimmed)

TASK [grafana : Copying over grafana.ini] **************************************
Saturday 10 January 2026 14:49:03 +0000 (0:00:01.226)       0:00:07.484 ******
changed: [testbed-node-0] => (item=grafana service definition, identical dict trimmed)
changed: [testbed-node-1] => (item=grafana service definition, identical dict trimmed)
changed: [testbed-node-2] => (item=grafana service definition, identical dict trimmed)

TASK [grafana : Copying over extra configuration file] *************************
Saturday 10 January 2026 14:49:04 +0000 (0:00:01.489)       0:00:08.974 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [grafana : Configuring Prometheus as data source for Grafana] *************
Saturday 10 January 2026 14:49:05 +0000 (0:00:00.554)       0:00:09.529 ******
changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)

TASK [grafana : Configuring dashboards provisioning] ***************************
Saturday 10 January 2026 14:49:06 +0000 (0:00:01.453)       0:00:10.982 ******
changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)

TASK [grafana : Find custom grafana dashboards] ********************************
Saturday 10 January 2026 14:49:08 +0000 (0:00:01.450)       0:00:12.433 ******
ok: [testbed-node-0 -> localhost]

TASK [grafana : Find templated grafana dashboards] *****************************
Saturday 10 January 2026 14:49:08 +0000 (0:00:00.767)       0:00:13.201 ******
[WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access issue: '/etc/kolla/grafana/dashboards' is not a directory
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [grafana : Prune templated Grafana dashboards] ****************************
Saturday 10 January 2026 14:49:09 +0000 (0:00:00.744)       0:00:13.945 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [grafana : Copying over custom dashboards] ********************************
Saturday 10 January 2026 14:49:10 +0000 (0:00:00.511)       0:00:14.457 ******
changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1319735, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9506907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item=ceph/ceph-cluster-advanced.json, identical stat dict trimmed)
changed: [testbed-node-2] => (item=ceph/ceph-cluster-advanced.json, identical stat dict trimmed)
changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1319788, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.966336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1319788, 'dev': 120,
'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.966336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1319788, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.966336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1319754, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9556851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1319754, 
'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9556851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1319754, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9556851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1319789, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.967495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 167897, 'inode': 1319789, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.967495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1319789, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.967495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1319767, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9594455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1319767, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9594455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1319767, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9594455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1319784, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9641497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1319784, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9641497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1319784, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9641497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1319734, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9481091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1319734, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9481091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1319734, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9481091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1319745, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9516268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1319745, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9516268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1319745, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9516268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1319757, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9560418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1319757, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9560418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1319757, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9560418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1319772, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9607785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1319772, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9607785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1319772, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9607785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1319787, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.965867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1319787, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.965867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1319787, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.965867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1319746, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9550416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.481999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1319746, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9550416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1319746, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9550416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1319778, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9641497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482040 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1319778, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9641497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1319778, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9641497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1319770, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9600432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482066 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1319770, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9600432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1319770, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9600432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1319766, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9586983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 
14:51:21.482080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1319766, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9586983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1319766, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9586983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1319765, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9576263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1319765, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9576263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1319765, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9576263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1319776, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9624505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1319776, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9624505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1319776, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9624505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1319762, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9572878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1319762, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9572878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1319762, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9572878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1319785, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.965011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1319785, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.965011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1319785, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.965011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319941, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0204475, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319941, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0204475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319941, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0204475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1319815, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 
'ctime': 1768053531.9975607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1319815, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9975607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1319815, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9975607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1319807, 'dev': 120, 
'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9726515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1319807, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9726515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1319807, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9726515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1319880, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0008073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1319880, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0008073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1319796, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9703538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1319880, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0008073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1319796, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9703538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1319917, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0105329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1319796, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9703538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1319917, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0105329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1319884, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.008305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482323 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1319917, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0105329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1319884, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.008305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1319920, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0105329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1319920, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0105329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1319884, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.008305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319936, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0190034, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319936, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0190034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1319920, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0105329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1319914, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 
1768003362.0, 'ctime': 1768053532.0096817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1319914, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0096817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319936, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0190034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1319878, 'dev': 
120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9984953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1319878, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9984953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1319914, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0096817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 82960, 'inode': 1319814, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.975495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1319814, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.975495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1319878, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9984953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1319877, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9984953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1319877, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9984953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1319814, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.975495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1319808, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9744418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1319808, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9744418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1319877, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9984953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1319879, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9994955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1319879, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9994955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1319808, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9744418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482506 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319928, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0180402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319928, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0180402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1319879, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9994955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319924, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0124955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319924, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0124955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319928, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0180402, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1319798, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9710164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1319798, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9710164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319924, 'dev': 120, 'nlink': 1, 'atime': 
1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0124955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1319802, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.971693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1319802, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.971693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1319909, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0092087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1319909, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0092087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1319798, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.9710164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1319922, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0115445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1319922, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0115445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1319802, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053531.971693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1319909, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0092087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1319922, 'dev': 120, 'nlink': 1, 'atime': 1768003362.0, 'mtime': 1768003362.0, 'ctime': 1768053532.0115445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-10 14:51:21.482676 | orchestrator | 2026-01-10 14:51:21.482681 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-10 14:51:21.482686 | orchestrator | Saturday 10 January 2026 14:49:48 +0000 (0:00:38.317) 0:00:52.775 ****** 2026-01-10 14:51:21.482775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:51:21.482797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:51:21.482807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-10 14:51:21.482812 | orchestrator | 2026-01-10 14:51:21.482816 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-10 14:51:21.482821 | orchestrator | Saturday 10 January 2026 14:49:49 +0000 (0:00:01.052) 0:00:53.828 ****** 2026-01-10 14:51:21.482864 | orchestrator | changed: 
[testbed-node-0] 2026-01-10 14:51:21.482870 | orchestrator | 2026-01-10 14:51:21.482874 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-10 14:51:21.482878 | orchestrator | Saturday 10 January 2026 14:49:52 +0000 (0:00:02.826) 0:00:56.654 ****** 2026-01-10 14:51:21.482883 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:51:21.482887 | orchestrator | 2026-01-10 14:51:21.482891 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-10 14:51:21.482896 | orchestrator | Saturday 10 January 2026 14:49:55 +0000 (0:00:03.254) 0:00:59.909 ****** 2026-01-10 14:51:21.482900 | orchestrator | 2026-01-10 14:51:21.482904 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-10 14:51:21.482909 | orchestrator | Saturday 10 January 2026 14:49:55 +0000 (0:00:00.062) 0:00:59.972 ****** 2026-01-10 14:51:21.482913 | orchestrator | 2026-01-10 14:51:21.482917 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-10 14:51:21.482922 | orchestrator | Saturday 10 January 2026 14:49:55 +0000 (0:00:00.059) 0:01:00.032 ****** 2026-01-10 14:51:21.482926 | orchestrator | 2026-01-10 14:51:21.482930 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-10 14:51:21.482934 | orchestrator | Saturday 10 January 2026 14:49:56 +0000 (0:00:00.234) 0:01:00.267 ****** 2026-01-10 14:51:21.482939 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:51:21.482943 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:51:21.482947 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:51:21.482952 | orchestrator | 2026-01-10 14:51:21.482956 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-10 14:51:21.482960 | orchestrator | Saturday 10 January 2026 14:49:57 +0000 (0:00:01.763) 
0:01:02.031 ****** 2026-01-10 14:51:21.482965 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:51:21.482969 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:51:21.482973 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-10 14:51:21.482978 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-10 14:51:21.482982 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-01-10 14:51:21.482986 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-01-10 14:51:21.483002 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:51:21.483006 | orchestrator | 2026-01-10 14:51:21.483011 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-10 14:51:21.483015 | orchestrator | Saturday 10 January 2026 14:50:49 +0000 (0:00:51.550) 0:01:53.581 ****** 2026-01-10 14:51:21.483020 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:51:21.483024 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:51:21.483028 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:51:21.483033 | orchestrator | 2026-01-10 14:51:21.483037 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-10 14:51:21.483041 | orchestrator | Saturday 10 January 2026 14:51:13 +0000 (0:00:23.958) 0:02:17.540 ****** 2026-01-10 14:51:21.483046 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:51:21.483050 | orchestrator | 2026-01-10 14:51:21.483054 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-10 14:51:21.483059 | orchestrator | Saturday 10 January 2026 14:51:15 +0000 (0:00:02.262) 0:02:19.802 ****** 2026-01-10 14:51:21.483063 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:51:21.483067 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:51:21.483075 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:51:21.483079 | orchestrator | 2026-01-10 14:51:21.483084 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-10 14:51:21.483088 | orchestrator | Saturday 10 January 2026 14:51:16 +0000 (0:00:00.615) 0:02:20.418 ****** 2026-01-10 14:51:21.483093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-01-10 14:51:21.483102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-10 14:51:21.483108 | orchestrator | 2026-01-10 14:51:21.483112 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-10 14:51:21.483117 | orchestrator | Saturday 10 January 2026 14:51:18 +0000 (0:00:02.353) 0:02:22.771 ****** 2026-01-10 14:51:21.483121 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:51:21.483125 | orchestrator | 2026-01-10 14:51:21.483129 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:51:21.483134 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:51:21.483139 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:51:21.483143 | 
orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:51:21.483147 | orchestrator | 2026-01-10 14:51:21.483152 | orchestrator | 2026-01-10 14:51:21.483156 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:51:21.483160 | orchestrator | Saturday 10 January 2026 14:51:18 +0000 (0:00:00.307) 0:02:23.078 ****** 2026-01-10 14:51:21.483165 | orchestrator | =============================================================================== 2026-01-10 14:51:21.483169 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.55s 2026-01-10 14:51:21.483173 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.32s 2026-01-10 14:51:21.483177 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 23.96s 2026-01-10 14:51:21.483182 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 3.25s 2026-01-10 14:51:21.483191 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.83s 2026-01-10 14:51:21.483195 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.35s 2026-01-10 14:51:21.483199 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.26s 2026-01-10 14:51:21.483203 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.76s 2026-01-10 14:51:21.483208 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.49s 2026-01-10 14:51:21.483212 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.45s 2026-01-10 14:51:21.483216 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.45s 2026-01-10 14:51:21.483220 | orchestrator | service-cert-copy : grafana | 
Copying over extra CA certificates -------- 1.30s 2026-01-10 14:51:21.483225 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.23s 2026-01-10 14:51:21.483251 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.05s 2026-01-10 14:51:21.483259 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.82s 2026-01-10 14:51:21.483266 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.77s 2026-01-10 14:51:21.483274 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.77s 2026-01-10 14:51:21.483281 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.74s 2026-01-10 14:51:21.483288 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.71s 2026-01-10 14:51:21.483293 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.66s 2026-01-10 14:51:21.483298 | orchestrator | 2026-01-10 14:51:21 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:21.483302 | orchestrator | 2026-01-10 14:51:21 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:24.537054 | orchestrator | 2026-01-10 14:51:24 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:24.538252 | orchestrator | 2026-01-10 14:51:24 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:24.538318 | orchestrator | 2026-01-10 14:51:24 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:27.586451 | orchestrator | 2026-01-10 14:51:27 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:27.590261 | orchestrator | 2026-01-10 14:51:27 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:27.590361 | orchestrator | 
2026-01-10 14:51:27 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:30.625123 | orchestrator | 2026-01-10 14:51:30 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:30.625412 | orchestrator | 2026-01-10 14:51:30 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:30.625459 | orchestrator | 2026-01-10 14:51:30 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:33.672139 | orchestrator | 2026-01-10 14:51:33 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:33.674622 | orchestrator | 2026-01-10 14:51:33 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:33.674709 | orchestrator | 2026-01-10 14:51:33 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:36.709862 | orchestrator | 2026-01-10 14:51:36 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:36.711766 | orchestrator | 2026-01-10 14:51:36 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:36.712251 | orchestrator | 2026-01-10 14:51:36 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:39.758238 | orchestrator | 2026-01-10 14:51:39 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:39.759774 | orchestrator | 2026-01-10 14:51:39 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:39.759873 | orchestrator | 2026-01-10 14:51:39 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:42.809869 | orchestrator | 2026-01-10 14:51:42 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:51:42.810833 | orchestrator | 2026-01-10 14:51:42 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED 2026-01-10 14:51:42.810858 | orchestrator | 2026-01-10 14:51:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 
14:51:45.868780 | orchestrator | 2026-01-10 14:51:45 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED
2026-01-10 14:51:45.870963 | orchestrator | 2026-01-10 14:51:45 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state STARTED
2026-01-10 14:51:45.871025 | orchestrator | 2026-01-10 14:51:45 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:55:16.603534 | orchestrator | 2026-01-10 14:55:16 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED
2026-01-10 14:55:16.609339 | orchestrator | 2026-01-10 14:55:16 | INFO  | Task 729922a7-4b21-46d8-9d02-cb4be214eaf4 is in state SUCCESS
2026-01-10 14:55:16.609509 | orchestrator |
2026-01-10 14:55:16.611562 | orchestrator |
2026-01-10 14:55:16.611608 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:55:16.611614 | orchestrator |
2026-01-10 14:55:16.611619 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-01-10 14:55:16.611623 | orchestrator | Saturday 10 January 2026 14:46:36 +0000 (0:00:00.272) 0:00:00.272 ******
2026-01-10 14:55:16.611627 | orchestrator | changed: [testbed-manager]
2026-01-10 14:55:16.611632 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.611635 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:55:16.611639 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:55:16.611643 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:55:16.611647 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:55:16.611651 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:55:16.611655 | orchestrator |
2026-01-10 14:55:16.611659 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:55:16.611662 | orchestrator | Saturday 10 January 2026 14:46:37 +0000 (0:00:00.964) 0:00:01.236 ******
2026-01-10 14:55:16.611666 | orchestrator | changed: [testbed-manager]
2026-01-10 14:55:16.611670 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.611674 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:55:16.611678 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:55:16.611681 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:55:16.611685 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:55:16.611689 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:55:16.611693 | orchestrator |
2026-01-10 14:55:16.611697 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:55:16.611701 | orchestrator | Saturday 10 January 2026 14:46:38 +0000 (0:00:00.728) 0:00:01.965 ******
2026-01-10 14:55:16.611704 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-01-10 14:55:16.611708 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-01-10 14:55:16.611712 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-01-10 14:55:16.611716 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-01-10 14:55:16.611720 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-01-10 14:55:16.611723 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-01-10 14:55:16.611741 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-01-10 14:55:16.611745 | orchestrator |
2026-01-10 14:55:16.611748 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-01-10 14:55:16.611752 | orchestrator |
2026-01-10 14:55:16.611756 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-10 14:55:16.611766 | orchestrator | Saturday 10 January 2026 14:46:39 +0000 (0:00:01.568) 0:00:03.533 ******
2026-01-10 14:55:16.611770 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:55:16.611774 | orchestrator |
2026-01-10 14:55:16.611778 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-01-10 14:55:16.611781 | orchestrator | Saturday 10 January 2026 14:46:41 +0000 (0:00:01.426) 0:00:04.960 ******
2026-01-10 14:55:16.611785 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-01-10 14:55:16.611789 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-01-10 14:55:16.611793 | orchestrator |
2026-01-10 14:55:16.611797 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-01-10 14:55:16.611801 | orchestrator | Saturday 10 January 2026 14:46:45 +0000 (0:00:04.357) 0:00:09.318 ******
2026-01-10 14:55:16.611804 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:55:16.611808 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:55:16.611812 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.611818 | orchestrator |
2026-01-10 14:55:16.611852 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-10 14:55:16.611859 | orchestrator | Saturday 10 January 2026 14:46:49 +0000 (0:00:04.321) 0:00:13.640 ******
2026-01-10 14:55:16.611865 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.611872 | orchestrator |
2026-01-10 14:55:16.611878 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-10 14:55:16.611884 | orchestrator | Saturday 10 January 2026 14:46:50 +0000 (0:00:00.986) 0:00:14.627 ******
2026-01-10 14:55:16.611890 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.611896 | orchestrator |
2026-01-10 14:55:16.611902 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-10 14:55:16.611909 | orchestrator | Saturday 10 January 2026 14:46:52 +0000 (0:00:02.412) 0:00:16.453 ******
2026-01-10 14:55:16.611915 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.611921 | orchestrator |
2026-01-10 14:55:16.611929 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-10 14:55:16.611936 | orchestrator | Saturday 10 January 2026 14:46:55 +0000 (0:00:00.591) 0:00:18.866 ******
2026-01-10 14:55:16.611942 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.611946 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.611950 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.611954 | orchestrator |
2026-01-10 14:55:16.611958 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-10 14:55:16.611962 | orchestrator | Saturday 10 January 2026 14:46:55 +0000 (0:00:00.591) 0:00:19.457 ******
2026-01-10 14:55:16.611966 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:55:16.611970 | orchestrator |
2026-01-10 14:55:16.611974 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-10 14:55:16.611977 | orchestrator | Saturday 10 January 2026 14:47:28 +0000 (0:00:32.760) 0:00:52.218 ******
2026-01-10 14:55:16.611981 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.611985 | orchestrator |
2026-01-10 14:55:16.611988 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-10 14:55:16.611999 | orchestrator | Saturday 10 January 2026 14:47:47 +0000 (0:00:18.911) 0:01:11.129 ******
2026-01-10 14:55:16.612003 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:55:16.612007 | orchestrator |
2026-01-10 14:55:16.612010 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-10 14:55:16.612014 | orchestrator | Saturday 10 January 2026 14:48:01 +0000 (0:00:14.000) 0:01:25.129 ******
2026-01-10 14:55:16.612030 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:55:16.612043 | orchestrator |
2026-01-10 14:55:16.612049 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-10 14:55:16.612055 | orchestrator | Saturday 10 January 2026 14:48:02 +0000 (0:00:01.204) 0:01:26.333 ******
2026-01-10 14:55:16.612061 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.612068 | orchestrator |
2026-01-10 14:55:16.612074 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-10 14:55:16.612080 | orchestrator | Saturday 10 January 2026 14:48:02 +0000 (0:00:00.463) 0:01:26.796 ******
2026-01-10 14:55:16.612087 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:55:16.612094 | orchestrator |
2026-01-10 14:55:16.612100 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-10 14:55:16.612106 | orchestrator | Saturday 10 January 2026 14:48:03 +0000 (0:00:00.499) 0:01:27.296 ******
2026-01-10 14:55:16.612112 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:55:16.612119 | orchestrator |
2026-01-10 14:55:16.612126 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-10 14:55:16.612131 | orchestrator | Saturday 10 January 2026 14:48:24 +0000 (0:00:20.890) 0:01:48.186 ******
2026-01-10 14:55:16.612135 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.612139 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612143 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612146 | orchestrator |
2026-01-10 14:55:16.612150 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-10 14:55:16.612154 | orchestrator |
2026-01-10 14:55:16.612158 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-10 14:55:16.612163 | orchestrator | Saturday 10 January 2026 14:48:24 +0000 (0:00:00.394) 0:01:48.581 ******
2026-01-10 14:55:16.612167 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:55:16.612171 | orchestrator |
2026-01-10 14:55:16.612176 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-10 14:55:16.612180 | orchestrator | Saturday 10 January 2026 14:48:25 +0000 (0:00:00.727) 0:01:49.309 ******
2026-01-10 14:55:16.612184 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612188 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612193 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.612197 | orchestrator |
2026-01-10 14:55:16.612201 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-10 14:55:16.612206 | orchestrator | Saturday 10 January 2026 14:48:27 +0000 (0:00:02.213) 0:01:51.522 ******
2026-01-10 14:55:16.612210 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612214 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612219 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.612223 | orchestrator |
2026-01-10 14:55:16.612227 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-10 14:55:16.612234 | orchestrator | Saturday 10 January 2026 14:48:29 +0000 (0:00:02.136) 0:01:53.659 ******
2026-01-10 14:55:16.612241 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.612248 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612255 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612262 | orchestrator |
2026-01-10 14:55:16.612267 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-10 14:55:16.612272 | orchestrator | Saturday 10 January 2026 14:48:30 +0000 (0:00:00.360) 0:01:54.020 ******
2026-01-10 14:55:16.612276 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-10 14:55:16.612280 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612285 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-10 14:55:16.612289 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612295 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-10 14:55:16.612302 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-10 14:55:16.612398 | orchestrator |
2026-01-10 14:55:16.612404 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-10 14:55:16.612410 | orchestrator | Saturday 10 January 2026 14:48:39 +0000 (0:00:09.663) 0:02:03.683 ******
2026-01-10 14:55:16.612418 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.612427 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612433 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612439 | orchestrator |
2026-01-10 14:55:16.612445 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-10 14:55:16.612450 | orchestrator | Saturday 10 January 2026 14:48:40 +0000 (0:00:00.330) 0:02:04.014 ******
2026-01-10 14:55:16.612468 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-10 14:55:16.612475 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.612482 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-10 14:55:16.612488 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612495 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-10 14:55:16.612501 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612507 | orchestrator |
2026-01-10 14:55:16.612514 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-10 14:55:16.612520 | orchestrator | Saturday 10 January 2026 14:48:40 +0000 (0:00:00.575) 0:02:04.589 ******
2026-01-10 14:55:16.612526 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612533 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612539 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.612546 | orchestrator |
2026-01-10 14:55:16.612553 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-01-10 14:55:16.612559 | orchestrator | Saturday 10 January 2026 14:48:41 +0000 (0:00:00.661) 0:02:05.251 ******
2026-01-10 14:55:16.612566 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612570 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612585 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.612589 | orchestrator |
2026-01-10 14:55:16.612593 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-10 14:55:16.612596 | orchestrator | Saturday 10 January 2026 14:48:42 +0000 (0:00:00.921) 0:02:06.173 ******
2026-01-10 14:55:16.612600 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612604 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612613 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:55:16.612617 | orchestrator |
2026-01-10 14:55:16.612621 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-01-10 14:55:16.612625 | orchestrator | Saturday 10 January 2026 14:48:44 +0000 (0:00:01.978) 0:02:08.152 ******
2026-01-10 14:55:16.612629 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.612634 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.612640 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:55:16.612649 | orchestrator |
2026-01-10 14:55:16.612656 | orchestrator | TASK [nova-cell :
Get a list of existing cells] ******************************** 2026-01-10 14:55:16.612662 | orchestrator | Saturday 10 January 2026 14:49:04 +0000 (0:00:20.662) 0:02:28.814 ****** 2026-01-10 14:55:16.612668 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.612674 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.612680 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:16.612686 | orchestrator | 2026-01-10 14:55:16.612692 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-10 14:55:16.612699 | orchestrator | Saturday 10 January 2026 14:49:20 +0000 (0:00:15.812) 0:02:44.627 ****** 2026-01-10 14:55:16.612706 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:16.612712 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.612719 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.612724 | orchestrator | 2026-01-10 14:55:16.612728 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-01-10 14:55:16.612733 | orchestrator | Saturday 10 January 2026 14:49:21 +0000 (0:00:00.861) 0:02:45.488 ****** 2026-01-10 14:55:16.612746 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.612752 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.612758 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:16.612764 | orchestrator | 2026-01-10 14:55:16.612770 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-01-10 14:55:16.612776 | orchestrator | Saturday 10 January 2026 14:49:35 +0000 (0:00:13.875) 0:02:59.363 ****** 2026-01-10 14:55:16.612782 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.612788 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.612794 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.612801 | orchestrator | 2026-01-10 14:55:16.612807 | orchestrator | TASK [Bootstrap upgrade] 
******************************************************* 2026-01-10 14:55:16.612814 | orchestrator | Saturday 10 January 2026 14:49:36 +0000 (0:00:01.033) 0:03:00.397 ****** 2026-01-10 14:55:16.612820 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.612826 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.612833 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.612839 | orchestrator | 2026-01-10 14:55:16.612846 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-01-10 14:55:16.612852 | orchestrator | 2026-01-10 14:55:16.612859 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-10 14:55:16.612866 | orchestrator | Saturday 10 January 2026 14:49:37 +0000 (0:00:00.629) 0:03:01.026 ****** 2026-01-10 14:55:16.612872 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:16.612879 | orchestrator | 2026-01-10 14:55:16.612885 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-01-10 14:55:16.612891 | orchestrator | Saturday 10 January 2026 14:49:37 +0000 (0:00:00.619) 0:03:01.645 ****** 2026-01-10 14:55:16.612897 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-01-10 14:55:16.612904 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-01-10 14:55:16.612910 | orchestrator | 2026-01-10 14:55:16.612917 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-01-10 14:55:16.612923 | orchestrator | Saturday 10 January 2026 14:49:42 +0000 (0:00:04.266) 0:03:05.912 ****** 2026-01-10 14:55:16.612930 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-01-10 14:55:16.612938 | orchestrator | skipping: [testbed-node-0] => 
(item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-01-10 14:55:16.612945 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-01-10 14:55:16.612952 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-01-10 14:55:16.612958 | orchestrator | 2026-01-10 14:55:16.612965 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-01-10 14:55:16.612969 | orchestrator | Saturday 10 January 2026 14:49:49 +0000 (0:00:07.888) 0:03:13.800 ****** 2026-01-10 14:55:16.612975 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:55:16.612981 | orchestrator | 2026-01-10 14:55:16.612987 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-01-10 14:55:16.612993 | orchestrator | Saturday 10 January 2026 14:49:54 +0000 (0:00:04.343) 0:03:18.143 ****** 2026-01-10 14:55:16.613000 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:55:16.613007 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-01-10 14:55:16.613012 | orchestrator | 2026-01-10 14:55:16.613016 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-01-10 14:55:16.613020 | orchestrator | Saturday 10 January 2026 14:49:58 +0000 (0:00:04.667) 0:03:22.810 ****** 2026-01-10 14:55:16.613024 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:55:16.613027 | orchestrator | 2026-01-10 14:55:16.613031 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-01-10 14:55:16.613047 | orchestrator | Saturday 10 January 2026 14:50:02 +0000 (0:00:03.183) 0:03:25.994 ****** 2026-01-10 14:55:16.613055 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-01-10 
14:55:16.613061 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-01-10 14:55:16.613067 | orchestrator | 2026-01-10 14:55:16.613100 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-01-10 14:55:16.613113 | orchestrator | Saturday 10 January 2026 14:50:09 +0000 (0:00:07.489) 0:03:33.483 ****** 2026-01-10 14:55:16.613125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.613135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.613143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.613163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.613172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.613244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.613315 | orchestrator | 2026-01-10 14:55:16.613322 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-01-10 14:55:16.613329 | orchestrator | Saturday 10 January 2026 14:50:10 +0000 (0:00:01.302) 0:03:34.786 ****** 2026-01-10 14:55:16.613335 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.613342 | orchestrator | 2026-01-10 14:55:16.613357 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-01-10 14:55:16.613363 | orchestrator | Saturday 10 January 2026 14:50:11 +0000 (0:00:00.141) 0:03:34.927 ****** 2026-01-10 14:55:16.613370 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.613385 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.613391 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.613397 | orchestrator | 2026-01-10 14:55:16.613422 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-01-10 14:55:16.613429 | orchestrator | Saturday 10 January 2026 14:50:11 +0000 (0:00:00.306) 0:03:35.233 ****** 2026-01-10 14:55:16.613440 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:55:16.613447 | orchestrator | 2026-01-10 14:55:16.613452 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-01-10 14:55:16.613456 | orchestrator | Saturday 10 January 2026 14:50:12 +0000 (0:00:00.966) 0:03:36.200 ****** 2026-01-10 14:55:16.613548 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.613554 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.613561 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.613567 | orchestrator | 2026-01-10 14:55:16.613574 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-10 14:55:16.613581 | orchestrator | 
Saturday 10 January 2026 14:50:12 +0000 (0:00:00.310) 0:03:36.510 ****** 2026-01-10 14:55:16.613588 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:16.613595 | orchestrator | 2026-01-10 14:55:16.613609 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-10 14:55:16.613613 | orchestrator | Saturday 10 January 2026 14:50:13 +0000 (0:00:00.562) 0:03:37.073 ****** 2026-01-10 14:55:16.613622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.613635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.613640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.613645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.613652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.613662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.613667 | orchestrator | 2026-01-10 14:55:16.613671 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-10 14:55:16.613674 | orchestrator | Saturday 10 January 2026 14:50:16 +0000 (0:00:02.793) 0:03:39.867 ****** 2026-01-10 14:55:16.613679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:55:16.613683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.613687 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.613691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:55:16.613700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.613705 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.613712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:55:16.613719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.613735 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.613742 | orchestrator | 2026-01-10 14:55:16.613748 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-10 14:55:16.613754 | orchestrator | Saturday 10 January 2026 14:50:16 +0000 (0:00:00.613) 0:03:40.480 ****** 2026-01-10 14:55:16.613761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:55:16.613773 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.613779 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.614124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:55:16.614142 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.614147 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.614151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:55:16.614159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.614163 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.614170 | orchestrator | 2026-01-10 14:55:16.614176 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-10 14:55:16.614182 | orchestrator | Saturday 10 January 2026 14:50:17 +0000 (0:00:00.867) 0:03:41.348 ****** 2026-01-10 14:55:16.614198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.614206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.614217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.614225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.614238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.614246 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.614252 | orchestrator | 2026-01-10 14:55:16.614259 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-10 14:55:16.614266 | orchestrator | Saturday 10 January 2026 14:50:19 +0000 (0:00:02.430) 0:03:43.778 ****** 2026-01-10 14:55:16.614273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.614285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.614305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.614314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.614321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.614332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.614339 | orchestrator | 2026-01-10 14:55:16.614345 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-10 14:55:16.614351 | orchestrator | Saturday 10 January 2026 14:50:25 +0000 (0:00:05.663) 0:03:49.442 ****** 2026-01-10 14:55:16.614357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-01-10 14:55:16.614366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.614370 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.614374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 
14:55:16.614381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.614401 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.614406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-10 14:55:16.614410 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.614414 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.614418 | orchestrator | 2026-01-10 14:55:16.614424 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-10 14:55:16.614428 | orchestrator | Saturday 10 January 2026 14:50:26 +0000 (0:00:00.602) 0:03:50.045 ****** 2026-01-10 14:55:16.614432 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:16.614436 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:16.614439 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:16.614446 | orchestrator | 2026-01-10 14:55:16.614473 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-01-10 14:55:16.614481 | orchestrator | Saturday 10 January 2026 14:50:27 +0000 (0:00:01.631) 0:03:51.677 ****** 2026-01-10 14:55:16.614487 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.614493 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.614499 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.614505 | orchestrator | 2026-01-10 14:55:16.614511 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-01-10 14:55:16.614516 | orchestrator | Saturday 10 January 2026 14:50:28 +0000 (0:00:00.327) 0:03:52.004 ****** 2026-01-10 14:55:16.614527 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.614534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.614547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:16.614554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.614565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.614572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.614578 | orchestrator | 2026-01-10 14:55:16.614585 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-10 14:55:16.614590 | orchestrator | Saturday 10 January 2026 14:50:30 +0000 (0:00:02.064) 0:03:54.068 ****** 2026-01-10 14:55:16.614596 | orchestrator | 2026-01-10 14:55:16.614601 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2026-01-10 14:55:16.614608 | orchestrator | Saturday 10 January 2026 14:50:30 +0000 (0:00:00.143) 0:03:54.212 ****** 2026-01-10 14:55:16.614614 | orchestrator | 2026-01-10 14:55:16.614620 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-10 14:55:16.614626 | orchestrator | Saturday 10 January 2026 14:50:30 +0000 (0:00:00.136) 0:03:54.348 ****** 2026-01-10 14:55:16.614632 | orchestrator | 2026-01-10 14:55:16.614638 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-01-10 14:55:16.614644 | orchestrator | Saturday 10 January 2026 14:50:30 +0000 (0:00:00.148) 0:03:54.497 ****** 2026-01-10 14:55:16.614650 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:16.614656 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:16.614775 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:16.614785 | orchestrator | 2026-01-10 14:55:16.614810 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-01-10 14:55:16.614816 | orchestrator | Saturday 10 January 2026 14:50:51 +0000 (0:00:20.690) 0:04:15.188 ****** 2026-01-10 14:55:16.614822 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:16.614831 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:16.614839 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:16.614845 | orchestrator | 2026-01-10 14:55:16.614851 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-01-10 14:55:16.614858 | orchestrator | 2026-01-10 14:55:16.614864 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:55:16.614870 | orchestrator | Saturday 10 January 2026 14:51:00 +0000 (0:00:09.617) 0:04:24.805 ****** 2026-01-10 14:55:16.614877 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:16.614884 | orchestrator | 2026-01-10 14:55:16.614890 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:55:16.614897 | orchestrator | Saturday 10 January 2026 14:51:02 +0000 (0:00:01.302) 0:04:26.108 ****** 2026-01-10 14:55:16.614904 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.614911 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.614924 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.614931 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.614935 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.614939 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.614943 | orchestrator | 2026-01-10 14:55:16.614948 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-01-10 14:55:16.614952 | orchestrator | Saturday 10 January 2026 14:51:02 +0000 (0:00:00.605) 0:04:26.714 ****** 2026-01-10 14:55:16.614956 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.614964 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.614968 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.614972 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:55:16.614977 | orchestrator | 2026-01-10 14:55:16.614984 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-10 14:55:16.614997 | orchestrator | Saturday 10 January 2026 14:51:03 +0000 (0:00:01.083) 0:04:27.797 ****** 2026-01-10 14:55:16.615004 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-01-10 14:55:16.615011 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-01-10 14:55:16.615018 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-01-10 
14:55:16.615024 | orchestrator | 2026-01-10 14:55:16.615031 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-10 14:55:16.615038 | orchestrator | Saturday 10 January 2026 14:51:04 +0000 (0:00:00.668) 0:04:28.466 ****** 2026-01-10 14:55:16.615045 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-01-10 14:55:16.615052 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-01-10 14:55:16.615058 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-01-10 14:55:16.615065 | orchestrator | 2026-01-10 14:55:16.615070 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-10 14:55:16.615075 | orchestrator | Saturday 10 January 2026 14:51:06 +0000 (0:00:01.462) 0:04:29.929 ****** 2026-01-10 14:55:16.615083 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-01-10 14:55:16.615088 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.615094 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-01-10 14:55:16.615100 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.615106 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-01-10 14:55:16.615113 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.615119 | orchestrator | 2026-01-10 14:55:16.615125 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-01-10 14:55:16.615132 | orchestrator | Saturday 10 January 2026 14:51:06 +0000 (0:00:00.606) 0:04:30.535 ****** 2026-01-10 14:55:16.615139 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:55:16.615146 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:55:16.615153 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.615159 | orchestrator | skipping: 
[testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:55:16.615166 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:55:16.615172 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.615178 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-10 14:55:16.615185 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-10 14:55:16.615189 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:55:16.615193 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:55:16.615197 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.615201 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-10 14:55:16.615205 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-10 14:55:16.615214 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-10 14:55:16.615220 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-10 14:55:16.615227 | orchestrator | 2026-01-10 14:55:16.615233 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-01-10 14:55:16.615240 | orchestrator | Saturday 10 January 2026 14:51:08 +0000 (0:00:01.383) 0:04:31.919 ****** 2026-01-10 14:55:16.615246 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.615251 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.615255 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.615259 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:55:16.615263 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:55:16.615267 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:55:16.615272 | 
orchestrator | 2026-01-10 14:55:16.615279 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-01-10 14:55:16.615285 | orchestrator | Saturday 10 January 2026 14:51:09 +0000 (0:00:01.272) 0:04:33.191 ****** 2026-01-10 14:55:16.615291 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.615298 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.615304 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.615310 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:55:16.615314 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:55:16.615317 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:55:16.615321 | orchestrator | 2026-01-10 14:55:16.615325 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-10 14:55:16.615329 | orchestrator | Saturday 10 January 2026 14:51:11 +0000 (0:00:01.986) 0:04:35.178 ****** 2026-01-10 14:55:16.615336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615447 | orchestrator | 2026-01-10 14:55:16.615451 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:55:16.615455 | orchestrator | Saturday 10 January 2026 14:51:13 +0000 (0:00:02.190) 
0:04:37.368 ****** 2026-01-10 14:55:16.615482 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:16.615486 | orchestrator | 2026-01-10 14:55:16.615490 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-10 14:55:16.615494 | orchestrator | Saturday 10 January 2026 14:51:14 +0000 (0:00:01.284) 0:04:38.653 ****** 2026-01-10 14:55:16.615498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615502 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.615591 | orchestrator | 2026-01-10 14:55:16.615595 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-10 14:55:16.615599 | orchestrator | Saturday 10 January 2026 14:51:18 +0000 (0:00:03.762) 0:04:42.415 ****** 2026-01-10 14:55:16.615608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:55:16.615615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:55:16.615619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.615623 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.615627 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:55:16.615631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:55:16.615639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.615646 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.615650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:55:16.615654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:55:16.615658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.615662 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.615665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:55:16.615670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
2026-01-10 14:55:16.615676 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.615688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:55:16.615700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.615709 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.615717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:55:16.615724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.615730 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.615735 | orchestrator | 2026-01-10 14:55:16.615741 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-10 14:55:16.615748 | orchestrator | Saturday 10 January 2026 14:51:20 +0000 (0:00:01.736) 0:04:44.152 ****** 2026-01-10 14:55:16.615754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:55:16.615760 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:55:16.616083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.616101 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.616108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:55:16.616115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:55:16.616122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.616129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:55:16.616143 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.616155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:55:16.616164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.616168 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.616172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:55:16.616178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.616185 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.616192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:55:16.616198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.616210 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.616217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:55:16.616231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:55:16.616238 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.616244 | orchestrator | 2026-01-10 14:55:16.616251 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:55:16.616256 | orchestrator | Saturday 10 January 2026 14:51:22 +0000 (0:00:02.388) 0:04:46.540 ****** 2026-01-10 14:55:16.616260 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.616264 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.616268 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.616274 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:55:16.616281 | orchestrator | 2026-01-10 14:55:16.616287 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-01-10 14:55:16.616293 | orchestrator | Saturday 10 January 2026 14:51:23 +0000 (0:00:01.073) 0:04:47.614 ****** 2026-01-10 14:55:16.616300 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:55:16.616306 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:55:16.616313 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 14:55:16.616319 | orchestrator | 2026-01-10 14:55:16.616325 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-01-10 14:55:16.616331 | orchestrator | Saturday 10 January 2026 14:51:24 +0000 (0:00:00.963) 0:04:48.578 ****** 2026-01-10 14:55:16.616337 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:55:16.616344 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:55:16.616350 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 
14:55:16.616356 | orchestrator | 2026-01-10 14:55:16.616363 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-01-10 14:55:16.616369 | orchestrator | Saturday 10 January 2026 14:51:25 +0000 (0:00:00.953) 0:04:49.531 ****** 2026-01-10 14:55:16.616376 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:55:16.616382 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:55:16.616388 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:55:16.616395 | orchestrator | 2026-01-10 14:55:16.616401 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-01-10 14:55:16.616407 | orchestrator | Saturday 10 January 2026 14:51:26 +0000 (0:00:00.512) 0:04:50.043 ****** 2026-01-10 14:55:16.616413 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:55:16.616419 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:55:16.616426 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:55:16.616432 | orchestrator | 2026-01-10 14:55:16.616438 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-01-10 14:55:16.616444 | orchestrator | Saturday 10 January 2026 14:51:27 +0000 (0:00:00.842) 0:04:50.886 ****** 2026-01-10 14:55:16.616450 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-10 14:55:16.616485 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-10 14:55:16.616493 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-10 14:55:16.616499 | orchestrator | 2026-01-10 14:55:16.616505 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-01-10 14:55:16.616511 | orchestrator | Saturday 10 January 2026 14:51:28 +0000 (0:00:01.251) 0:04:52.137 ****** 2026-01-10 14:55:16.616517 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-10 14:55:16.616523 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-10 
14:55:16.616530 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-10 14:55:16.616536 | orchestrator | 2026-01-10 14:55:16.616542 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-01-10 14:55:16.616549 | orchestrator | Saturday 10 January 2026 14:51:29 +0000 (0:00:01.104) 0:04:53.242 ****** 2026-01-10 14:55:16.616555 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-10 14:55:16.616561 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-10 14:55:16.616568 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-10 14:55:16.616574 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-01-10 14:55:16.616581 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-01-10 14:55:16.616587 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-01-10 14:55:16.616593 | orchestrator | 2026-01-10 14:55:16.616599 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-01-10 14:55:16.616605 | orchestrator | Saturday 10 January 2026 14:51:33 +0000 (0:00:03.978) 0:04:57.220 ****** 2026-01-10 14:55:16.616611 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.616617 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.616623 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.616629 | orchestrator | 2026-01-10 14:55:16.616636 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-01-10 14:55:16.616642 | orchestrator | Saturday 10 January 2026 14:51:33 +0000 (0:00:00.591) 0:04:57.812 ****** 2026-01-10 14:55:16.616649 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.616655 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.616662 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.616668 | orchestrator | 2026-01-10 14:55:16.616674 | 
orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-01-10 14:55:16.616680 | orchestrator | Saturday 10 January 2026 14:51:34 +0000 (0:00:00.319) 0:04:58.131 ****** 2026-01-10 14:55:16.616687 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:55:16.616693 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:55:16.616699 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:55:16.616705 | orchestrator | 2026-01-10 14:55:16.616715 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-01-10 14:55:16.616723 | orchestrator | Saturday 10 January 2026 14:51:35 +0000 (0:00:01.344) 0:04:59.475 ****** 2026-01-10 14:55:16.616733 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-10 14:55:16.616741 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-10 14:55:16.616748 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-10 14:55:16.616754 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-10 14:55:16.616761 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-10 14:55:16.616767 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-10 14:55:16.616779 | orchestrator | 2026-01-10 14:55:16.616786 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-01-10 14:55:16.616792 | orchestrator | Saturday 10 
January 2026 14:51:39 +0000 (0:00:03.628) 0:05:03.104 ****** 2026-01-10 14:55:16.616799 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:55:16.616805 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:55:16.616812 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:55:16.616819 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:55:16.616825 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:55:16.616832 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:55:16.616838 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:55:16.616844 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:55:16.616851 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:55:16.616857 | orchestrator | 2026-01-10 14:55:16.616863 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-01-10 14:55:16.616870 | orchestrator | Saturday 10 January 2026 14:51:42 +0000 (0:00:03.256) 0:05:06.361 ****** 2026-01-10 14:55:16.616876 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.616882 | orchestrator | 2026-01-10 14:55:16.616889 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-01-10 14:55:16.616895 | orchestrator | Saturday 10 January 2026 14:51:42 +0000 (0:00:00.152) 0:05:06.513 ****** 2026-01-10 14:55:16.616901 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.616909 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.616915 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.616922 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.616928 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.616935 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.616941 | orchestrator | 2026-01-10 14:55:16.616948 | orchestrator | TASK [nova-cell : Check for vendordata file] 
*********************************** 2026-01-10 14:55:16.616954 | orchestrator | Saturday 10 January 2026 14:51:43 +0000 (0:00:00.618) 0:05:07.132 ****** 2026-01-10 14:55:16.616961 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:55:16.616967 | orchestrator | 2026-01-10 14:55:16.616973 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-01-10 14:55:16.616979 | orchestrator | Saturday 10 January 2026 14:51:43 +0000 (0:00:00.703) 0:05:07.835 ****** 2026-01-10 14:55:16.616986 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.616992 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.616998 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.617005 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.617011 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.617017 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.617024 | orchestrator | 2026-01-10 14:55:16.617030 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-01-10 14:55:16.617037 | orchestrator | Saturday 10 January 2026 14:51:44 +0000 (0:00:00.895) 0:05:08.731 ****** 2026-01-10 14:55:16.617044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.617064 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.617072 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.617079 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.617086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.617093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.617100 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-10 14:55:16.617117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-10 14:55:16.617124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-10 14:55:16.617131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617182 | orchestrator |
2026-01-10 14:55:16.617188 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-01-10 14:55:16.617195 | orchestrator | Saturday 10 January 2026 14:51:48 +0000 (0:00:04.115) 0:05:12.847 ******
2026-01-10 14:55:16.617201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-10 14:55:16.617207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-10 14:55:16.617214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-10 14:55:16.617225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-10 14:55:16.617238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-10 14:55:16.617245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-10 14:55:16.617252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617266 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-10 14:55:16.617300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-10 14:55:16.617307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-10 14:55:16.617313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617338 | orchestrator |
2026-01-10 14:55:16.617344 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-01-10 14:55:16.617351 | orchestrator | Saturday 10 January 2026 14:51:55 +0000 (0:00:06.253) 0:05:19.100 ******
2026-01-10 14:55:16.617357 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:55:16.617363 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:55:16.617369 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:55:16.617375 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.617382 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.617388 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.617394 | orchestrator |
2026-01-10 14:55:16.617400 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-01-10 14:55:16.617406 | orchestrator | Saturday 10 January 2026 14:51:56 +0000 (0:00:01.507) 0:05:20.608 ******
2026-01-10 14:55:16.617413 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-10 14:55:16.617419 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-10 14:55:16.617425 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-10 14:55:16.617432 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-10 14:55:16.617438 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-10 14:55:16.617448 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-10 14:55:16.617455 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-10 14:55:16.617489 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.617499 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-10 14:55:16.617504 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.617512 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-10 14:55:16.617516 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.617520 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-10 14:55:16.617523 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-10 14:55:16.617527 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-10 14:55:16.617531 | orchestrator |
2026-01-10 14:55:16.617535 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-01-10 14:55:16.617538 | orchestrator | Saturday 10 January 2026 14:52:00 +0000 (0:00:03.792) 0:05:24.401 ******
2026-01-10 14:55:16.617542 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:55:16.617546 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:55:16.617549 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:55:16.617553 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.617557 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.617560 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.617564 | orchestrator |
2026-01-10 14:55:16.617568 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-01-10 14:55:16.617571 | orchestrator | Saturday 10 January 2026 14:52:01 +0000 (0:00:00.621) 0:05:25.023 ******
2026-01-10 14:55:16.617575 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-10 14:55:16.617579 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-10 14:55:16.617583 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-10 14:55:16.617590 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-10 14:55:16.617594 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617597 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-10 14:55:16.617601 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-10 14:55:16.617605 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617609 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617612 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617616 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.617620 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617623 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.617627 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617631 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.617635 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617638 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617642 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617646 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617649 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617653 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-10 14:55:16.617657 | orchestrator |
2026-01-10 14:55:16.617661 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-01-10 14:55:16.617664 | orchestrator | Saturday 10 January 2026 14:52:06 +0000 (0:00:05.570) 0:05:30.594 ******
2026-01-10 14:55:16.617668 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:55:16.617672 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:55:16.617676 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:55:16.617682 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:55:16.617685 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:55:16.617689 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:55:16.617695 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:55:16.617699 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:55:16.617703 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:55:16.617707 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:55:16.617710 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:55:16.617714 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:55:16.617720 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:55:16.617724 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.617728 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:55:16.617731 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:55:16.617735 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.617739 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:55:16.617743 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:55:16.617746 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.617750 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:55:16.617754 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:55:16.617758 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:55:16.617761 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:55:16.617765 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:55:16.617769 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:55:16.617772 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:55:16.617776 | orchestrator |
2026-01-10 14:55:16.617780 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-01-10 14:55:16.617784 | orchestrator | Saturday 10 January 2026 14:52:14 +0000 (0:00:07.330) 0:05:37.924 ******
2026-01-10 14:55:16.617788 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:55:16.617791 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:55:16.617795 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:55:16.617799 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.617803 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.617806 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.617810 | orchestrator |
2026-01-10 14:55:16.617814 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-01-10 14:55:16.617818 | orchestrator | Saturday 10 January 2026 14:52:14 +0000 (0:00:00.803) 0:05:38.727 ******
2026-01-10 14:55:16.617822 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:55:16.617825 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:55:16.617829 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:55:16.617833 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.617837 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.617840 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.617844 | orchestrator |
2026-01-10 14:55:16.617848 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-01-10 14:55:16.617852 | orchestrator | Saturday 10 January 2026 14:52:15 +0000 (0:00:00.637) 0:05:39.365 ******
2026-01-10 14:55:16.617855 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.617859 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.617863 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.617866 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:55:16.617871 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:55:16.617877 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:55:16.617882 | orchestrator |
2026-01-10 14:55:16.617886 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-01-10 14:55:16.617890 | orchestrator | Saturday 10 January 2026 14:52:17 +0000 (0:00:02.294) 0:05:41.659 ******
2026-01-10 14:55:16.617896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-10 14:55:16.617907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-10 14:55:16.617911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617915 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:55:16.617919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-10 14:55:16.617923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-10 14:55:16.617928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617934 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:55:16.617944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-10 14:55:16.617949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-10 14:55:16.617953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617957 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:55:16.617961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-10 14:55:16.617965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617971 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:55:16.617975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-10 14:55:16.617981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617987 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:55:16.617991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-10 14:55:16.617995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:55:16.617999 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:55:16.618003 | orchestrator |
2026-01-10 14:55:16.618007 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-01-10 14:55:16.618011 | orchestrator | Saturday 10 January 2026 14:52:19 +0000 (0:00:01.415) 0:05:43.075 ******
2026-01-10 14:55:16.618044 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-01-10 14:55:16.618049 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-01-10 14:55:16.618053 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:55:16.618056 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-01-10 14:55:16.618060 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-01-10 14:55:16.618066 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:55:16.618072 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-01-10 14:55:16.618082 |
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-10 14:55:16.618089 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.618096 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-10 14:55:16.618103 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-10 14:55:16.618109 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.618116 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-10 14:55:16.618122 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-10 14:55:16.618134 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.618141 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-10 14:55:16.618145 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-10 14:55:16.618149 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.618153 | orchestrator | 2026-01-10 14:55:16.618156 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-01-10 14:55:16.618160 | orchestrator | Saturday 10 January 2026 14:52:20 +0000 (0:00:00.966) 0:05:44.041 ****** 2026-01-10 14:55:16.618164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618197 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:16.618298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2026-01-10 14:55:16.618306 | orchestrator | 2026-01-10 14:55:16.618310 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:55:16.618314 | orchestrator | Saturday 10 January 2026 14:52:22 +0000 (0:00:02.729) 0:05:46.771 ****** 2026-01-10 14:55:16.618318 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.618322 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.618326 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.618330 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.618335 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.618341 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.618345 | orchestrator | 2026-01-10 14:55:16.618349 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:55:16.618353 | orchestrator | Saturday 10 January 2026 14:52:23 +0000 (0:00:00.842) 0:05:47.613 ****** 2026-01-10 14:55:16.618357 | orchestrator | 2026-01-10 14:55:16.618361 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:55:16.618368 | orchestrator | Saturday 10 January 2026 14:52:23 +0000 (0:00:00.134) 0:05:47.748 ****** 2026-01-10 14:55:16.618372 | orchestrator | 2026-01-10 14:55:16.618376 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:55:16.618380 | orchestrator | Saturday 10 January 2026 14:52:24 +0000 (0:00:00.151) 0:05:47.899 ****** 2026-01-10 14:55:16.618383 | orchestrator | 2026-01-10 14:55:16.618387 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:55:16.618391 | orchestrator | Saturday 10 January 2026 14:52:24 +0000 (0:00:00.134) 0:05:48.033 ****** 2026-01-10 14:55:16.618395 | orchestrator | 2026-01-10 14:55:16.618399 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-01-10 14:55:16.618402 | orchestrator | Saturday 10 January 2026 14:52:24 +0000 (0:00:00.133) 0:05:48.167 ****** 2026-01-10 14:55:16.618406 | orchestrator | 2026-01-10 14:55:16.618410 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:55:16.618414 | orchestrator | Saturday 10 January 2026 14:52:24 +0000 (0:00:00.132) 0:05:48.299 ****** 2026-01-10 14:55:16.618418 | orchestrator | 2026-01-10 14:55:16.618422 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-10 14:55:16.618426 | orchestrator | Saturday 10 January 2026 14:52:24 +0000 (0:00:00.342) 0:05:48.642 ****** 2026-01-10 14:55:16.618429 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:16.618433 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:16.618437 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:16.618441 | orchestrator | 2026-01-10 14:55:16.618445 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-10 14:55:16.618449 | orchestrator | Saturday 10 January 2026 14:52:31 +0000 (0:00:06.726) 0:05:55.368 ****** 2026-01-10 14:55:16.618454 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:16.618472 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:16.618479 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:16.618485 | orchestrator | 2026-01-10 14:55:16.618491 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-10 14:55:16.618497 | orchestrator | Saturday 10 January 2026 14:52:43 +0000 (0:00:12.370) 0:06:07.739 ****** 2026-01-10 14:55:16.618504 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:55:16.618510 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:55:16.618516 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:55:16.618522 | orchestrator | 2026-01-10 
14:55:16.618528 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-10 14:55:16.618534 | orchestrator | Saturday 10 January 2026 14:53:04 +0000 (0:00:20.744) 0:06:28.483 ****** 2026-01-10 14:55:16.618540 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:55:16.618547 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:55:16.618553 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:55:16.618560 | orchestrator | 2026-01-10 14:55:16.618567 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-10 14:55:16.618573 | orchestrator | Saturday 10 January 2026 14:53:35 +0000 (0:00:30.696) 0:06:59.180 ****** 2026-01-10 14:55:16.618582 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:55:16.618586 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:55:16.618590 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:55:16.618594 | orchestrator | 2026-01-10 14:55:16.618597 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-10 14:55:16.618601 | orchestrator | Saturday 10 January 2026 14:53:36 +0000 (0:00:00.840) 0:07:00.021 ****** 2026-01-10 14:55:16.618605 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:55:16.618609 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:55:16.618613 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:55:16.618617 | orchestrator | 2026-01-10 14:55:16.618624 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-10 14:55:16.618630 | orchestrator | Saturday 10 January 2026 14:53:37 +0000 (0:00:00.865) 0:07:00.886 ****** 2026-01-10 14:55:16.618636 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:55:16.618648 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:55:16.618656 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:55:16.618662 | orchestrator | 2026-01-10 14:55:16.618672 
| orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-10 14:55:16.618679 | orchestrator | Saturday 10 January 2026 14:54:01 +0000 (0:00:24.103) 0:07:24.990 ****** 2026-01-10 14:55:16.618685 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.618691 | orchestrator | 2026-01-10 14:55:16.618697 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-10 14:55:16.618706 | orchestrator | Saturday 10 January 2026 14:54:01 +0000 (0:00:00.139) 0:07:25.129 ****** 2026-01-10 14:55:16.618710 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.618714 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.618720 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.618727 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.618733 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.618740 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-01-10 14:55:16.618747 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:55:16.618754 | orchestrator | 2026-01-10 14:55:16.618761 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-10 14:55:16.618767 | orchestrator | Saturday 10 January 2026 14:54:24 +0000 (0:00:22.939) 0:07:48.069 ****** 2026-01-10 14:55:16.618773 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.618779 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.618786 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.618793 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.618798 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.618802 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.618806 | orchestrator | 2026-01-10 14:55:16.618810 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-10 14:55:16.618816 | orchestrator | Saturday 10 January 2026 14:54:34 +0000 (0:00:10.310) 0:07:58.379 ****** 2026-01-10 14:55:16.618822 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.618829 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.618835 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.618841 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.618847 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.618854 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-01-10 14:55:16.618860 | orchestrator | 2026-01-10 14:55:16.618867 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-10 14:55:16.618873 | orchestrator | Saturday 10 January 2026 14:54:38 +0000 (0:00:03.759) 0:08:02.138 ****** 2026-01-10 14:55:16.618880 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:55:16.618886 | 
orchestrator | 2026-01-10 14:55:16.618893 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-10 14:55:16.618899 | orchestrator | Saturday 10 January 2026 14:54:52 +0000 (0:00:14.544) 0:08:16.683 ****** 2026-01-10 14:55:16.618905 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:55:16.618911 | orchestrator | 2026-01-10 14:55:16.618918 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-10 14:55:16.618924 | orchestrator | Saturday 10 January 2026 14:54:54 +0000 (0:00:01.360) 0:08:18.044 ****** 2026-01-10 14:55:16.618930 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.618937 | orchestrator | 2026-01-10 14:55:16.618943 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-10 14:55:16.618949 | orchestrator | Saturday 10 January 2026 14:54:55 +0000 (0:00:01.518) 0:08:19.562 ****** 2026-01-10 14:55:16.618956 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:55:16.618962 | orchestrator | 2026-01-10 14:55:16.618969 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-01-10 14:55:16.618982 | orchestrator | Saturday 10 January 2026 14:55:07 +0000 (0:00:11.880) 0:08:31.443 ****** 2026-01-10 14:55:16.618988 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:55:16.618995 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:55:16.619001 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:55:16.619007 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:16.619013 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:55:16.619020 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:55:16.619026 | orchestrator | 2026-01-10 14:55:16.619033 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-10 14:55:16.619039 | orchestrator | 2026-01-10 
14:55:16.619046 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-10 14:55:16.619052 | orchestrator | Saturday 10 January 2026 14:55:09 +0000 (0:00:02.295) 0:08:33.738 ****** 2026-01-10 14:55:16.619059 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:16.619065 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:16.619071 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:16.619077 | orchestrator | 2026-01-10 14:55:16.619084 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-01-10 14:55:16.619090 | orchestrator | 2026-01-10 14:55:16.619096 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-01-10 14:55:16.619102 | orchestrator | Saturday 10 January 2026 14:55:11 +0000 (0:00:01.306) 0:08:35.044 ****** 2026-01-10 14:55:16.619108 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.619115 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.619121 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.619127 | orchestrator | 2026-01-10 14:55:16.619133 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-01-10 14:55:16.619140 | orchestrator | 2026-01-10 14:55:16.619146 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-01-10 14:55:16.619152 | orchestrator | Saturday 10 January 2026 14:55:11 +0000 (0:00:00.553) 0:08:35.598 ****** 2026-01-10 14:55:16.619158 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-01-10 14:55:16.619165 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-10 14:55:16.619172 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-10 14:55:16.619178 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-01-10 14:55:16.619184 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-01-10 14:55:16.619195 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-01-10 14:55:16.619201 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:55:16.619208 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-01-10 14:55:16.619214 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-10 14:55:16.619226 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-10 14:55:16.619232 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-01-10 14:55:16.619238 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-01-10 14:55:16.619244 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-01-10 14:55:16.619250 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:55:16.619256 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-01-10 14:55:16.619263 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-10 14:55:16.619269 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-10 14:55:16.619275 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-01-10 14:55:16.619282 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-01-10 14:55:16.619288 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-01-10 14:55:16.619294 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:55:16.619301 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-01-10 14:55:16.619311 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-10 14:55:16.619317 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-10 14:55:16.619323 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-01-10 14:55:16.619333 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-01-10 14:55:16.619339 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-01-10 14:55:16.619345 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.619352 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-01-10 14:55:16.619358 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-10 14:55:16.619364 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-10 14:55:16.619370 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-01-10 14:55:16.619376 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-01-10 14:55:16.619382 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-01-10 14:55:16.619389 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.619395 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-01-10 14:55:16.619401 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-10 14:55:16.619407 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-10 14:55:16.619413 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-01-10 14:55:16.619419 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-01-10 14:55:16.619425 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-01-10 14:55:16.619432 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.619437 | orchestrator | 2026-01-10 14:55:16.619443 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-01-10 14:55:16.619449 | orchestrator | 2026-01-10 14:55:16.619453 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-01-10 14:55:16.619483 | orchestrator | Saturday 10 January 2026 14:55:13 +0000 (0:00:01.415) 
0:08:37.013 ****** 2026-01-10 14:55:16.619488 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-01-10 14:55:16.619492 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-10 14:55:16.619496 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.619499 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-01-10 14:55:16.619503 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-01-10 14:55:16.619507 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.619511 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-01-10 14:55:16.619514 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-10 14:55:16.619518 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:16.619522 | orchestrator | 2026-01-10 14:55:16.619526 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-01-10 14:55:16.619530 | orchestrator | 2026-01-10 14:55:16.619533 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-01-10 14:55:16.619537 | orchestrator | Saturday 10 January 2026 14:55:13 +0000 (0:00:00.800) 0:08:37.813 ****** 2026-01-10 14:55:16.619541 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.619545 | orchestrator | 2026-01-10 14:55:16.619548 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-01-10 14:55:16.619552 | orchestrator | 2026-01-10 14:55:16.619556 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-01-10 14:55:16.619559 | orchestrator | Saturday 10 January 2026 14:55:14 +0000 (0:00:00.686) 0:08:38.500 ****** 2026-01-10 14:55:16.619563 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:16.619567 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:16.619571 | orchestrator | skipping: [testbed-node-2] 
2026-01-10 14:55:16.619574 | orchestrator | 2026-01-10 14:55:16.619578 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:55:16.619586 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:55:16.619591 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-01-10 14:55:16.619598 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-10 14:55:16.619602 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-10 14:55:16.619610 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-10 14:55:16.619614 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-01-10 14:55:16.619617 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-10 14:55:16.619621 | orchestrator | 2026-01-10 14:55:16.619625 | orchestrator | 2026-01-10 14:55:16.619629 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:55:16.619632 | orchestrator | Saturday 10 January 2026 14:55:15 +0000 (0:00:00.504) 0:08:39.005 ****** 2026-01-10 14:55:16.619636 | orchestrator | =============================================================================== 2026-01-10 14:55:16.619640 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.76s 2026-01-10 14:55:16.619644 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.70s 2026-01-10 14:55:16.619647 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.10s 2026-01-10 14:55:16.619651 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 22.94s 2026-01-10 14:55:16.619655 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.89s 2026-01-10 14:55:16.619658 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.74s 2026-01-10 14:55:16.619662 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.69s 2026-01-10 14:55:16.619666 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.66s 2026-01-10 14:55:16.619669 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 18.91s 2026-01-10 14:55:16.619673 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.81s 2026-01-10 14:55:16.619677 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.54s 2026-01-10 14:55:16.619681 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.00s 2026-01-10 14:55:16.619684 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.88s 2026-01-10 14:55:16.619688 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.37s 2026-01-10 14:55:16.619691 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.88s 2026-01-10 14:55:16.619695 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.31s 2026-01-10 14:55:16.619699 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.66s 2026-01-10 14:55:16.619703 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.62s 2026-01-10 14:55:16.619707 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 7.89s 2026-01-10 14:55:16.619710 | orchestrator | service-ks-register : nova 
| Granting user roles ------------------------ 7.49s 2026-01-10 14:55:16.619717 | orchestrator | 2026-01-10 14:55:16 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:19.656210 | orchestrator | 2026-01-10 14:55:19 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:55:19.656272 | orchestrator | 2026-01-10 14:55:19 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:22.701923 | orchestrator | 2026-01-10 14:55:22 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:55:22.701994 | orchestrator | 2026-01-10 14:55:22 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:25.740532 | orchestrator | 2026-01-10 14:55:25 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:55:25.740595 | orchestrator | 2026-01-10 14:55:25 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:28.791189 | orchestrator | 2026-01-10 14:55:28 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:55:28.792784 | orchestrator | 2026-01-10 14:55:28 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:31.842732 | orchestrator | 2026-01-10 14:55:31 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:55:31.842790 | orchestrator | 2026-01-10 14:55:31 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:34.886070 | orchestrator | 2026-01-10 14:55:34 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:55:34.886116 | orchestrator | 2026-01-10 14:55:34 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:37.932077 | orchestrator | 2026-01-10 14:55:37 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:55:37.932120 | orchestrator | 2026-01-10 14:55:37 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:40.978539 | orchestrator | 2026-01-10 14:55:40 | INFO  | Task 
ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:55:40.978594 | orchestrator | 2026-01-10 14:55:40 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:44.033245 | orchestrator | 2026-01-10 14:55:44 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state STARTED 2026-01-10 14:55:44.033307 | orchestrator | 2026-01-10 14:55:44 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:47.081390 | orchestrator | 2026-01-10 14:55:47.081500 | orchestrator | 2026-01-10 14:55:47 | INFO  | Task ea89c399-6bdd-4a82-b931-e6ca7a2e3270 is in state SUCCESS 2026-01-10 14:55:47.083357 | orchestrator | 2026-01-10 14:55:47.083403 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:55:47.083420 | orchestrator | 2026-01-10 14:55:47.083425 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:55:47.083428 | orchestrator | Saturday 10 January 2026 14:51:04 +0000 (0:00:00.255) 0:00:00.255 ****** 2026-01-10 14:55:47.083431 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:47.083435 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:55:47.083438 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:55:47.083441 | orchestrator | 2026-01-10 14:55:47.083445 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:55:47.083448 | orchestrator | Saturday 10 January 2026 14:51:05 +0000 (0:00:00.313) 0:00:00.569 ****** 2026-01-10 14:55:47.083452 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-01-10 14:55:47.083455 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-01-10 14:55:47.083458 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-01-10 14:55:47.083462 | orchestrator | 2026-01-10 14:55:47.083465 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-01-10 
14:55:47.083468 | orchestrator | 2026-01-10 14:55:47.083471 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:55:47.083474 | orchestrator | Saturday 10 January 2026 14:51:05 +0000 (0:00:00.450) 0:00:01.019 ****** 2026-01-10 14:55:47.083490 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:47.083494 | orchestrator | 2026-01-10 14:55:47.083497 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-01-10 14:55:47.083500 | orchestrator | Saturday 10 January 2026 14:51:06 +0000 (0:00:00.568) 0:00:01.588 ****** 2026-01-10 14:55:47.083504 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-01-10 14:55:47.083507 | orchestrator | 2026-01-10 14:55:47.083510 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-01-10 14:55:47.083513 | orchestrator | Saturday 10 January 2026 14:51:09 +0000 (0:00:03.654) 0:00:05.242 ****** 2026-01-10 14:55:47.083516 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-01-10 14:55:47.083519 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-01-10 14:55:47.083522 | orchestrator | 2026-01-10 14:55:47.083526 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-01-10 14:55:47.083529 | orchestrator | Saturday 10 January 2026 14:51:16 +0000 (0:00:06.595) 0:00:11.838 ****** 2026-01-10 14:55:47.083532 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:55:47.083535 | orchestrator | 2026-01-10 14:55:47.083538 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-01-10 14:55:47.083541 | orchestrator | Saturday 10 January 2026 14:51:19 +0000 
(0:00:03.082) 0:00:14.921 ****** 2026-01-10 14:55:47.083544 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:55:47.083547 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-10 14:55:47.083550 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-10 14:55:47.083553 | orchestrator | 2026-01-10 14:55:47.083556 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-01-10 14:55:47.083559 | orchestrator | Saturday 10 January 2026 14:51:26 +0000 (0:00:07.099) 0:00:22.020 ****** 2026-01-10 14:55:47.083563 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:55:47.083566 | orchestrator | 2026-01-10 14:55:47.083569 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-01-10 14:55:47.083572 | orchestrator | Saturday 10 January 2026 14:51:29 +0000 (0:00:03.057) 0:00:25.078 ****** 2026-01-10 14:55:47.083594 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-01-10 14:55:47.083597 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-01-10 14:55:47.083600 | orchestrator | 2026-01-10 14:55:47.083603 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-01-10 14:55:47.083606 | orchestrator | Saturday 10 January 2026 14:51:38 +0000 (0:00:08.476) 0:00:33.554 ****** 2026-01-10 14:55:47.083609 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-01-10 14:55:47.083612 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-01-10 14:55:47.083615 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-01-10 14:55:47.083619 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-01-10 14:55:47.083622 | orchestrator | changed: [testbed-node-0] => 
(item=load-balancer_quota_admin) 2026-01-10 14:55:47.083628 | orchestrator | 2026-01-10 14:55:47.083633 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:55:47.083638 | orchestrator | Saturday 10 January 2026 14:51:54 +0000 (0:00:16.192) 0:00:49.747 ****** 2026-01-10 14:55:47.083651 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:47.083659 | orchestrator | 2026-01-10 14:55:47.083664 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-01-10 14:55:47.083670 | orchestrator | Saturday 10 January 2026 14:51:54 +0000 (0:00:00.604) 0:00:50.351 ****** 2026-01-10 14:55:47.083679 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.083684 | orchestrator | 2026-01-10 14:55:47.083690 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-01-10 14:55:47.083695 | orchestrator | Saturday 10 January 2026 14:51:59 +0000 (0:00:04.665) 0:00:55.017 ****** 2026-01-10 14:55:47.083700 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.083703 | orchestrator | 2026-01-10 14:55:47.083706 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-10 14:55:47.083719 | orchestrator | Saturday 10 January 2026 14:52:05 +0000 (0:00:05.765) 0:01:00.782 ****** 2026-01-10 14:55:47.083725 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:47.083730 | orchestrator | 2026-01-10 14:55:47.083735 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-01-10 14:55:47.083741 | orchestrator | Saturday 10 January 2026 14:52:08 +0000 (0:00:03.022) 0:01:03.805 ****** 2026-01-10 14:55:47.083744 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-01-10 14:55:47.083748 | orchestrator | changed: [testbed-node-0] => 
(item=lb-health-mgr-sec-grp) 2026-01-10 14:55:47.083753 | orchestrator | 2026-01-10 14:55:47.083757 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-01-10 14:55:47.083763 | orchestrator | Saturday 10 January 2026 14:52:19 +0000 (0:00:11.123) 0:01:14.928 ****** 2026-01-10 14:55:47.083768 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-01-10 14:55:47.083773 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-01-10 14:55:47.083780 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-01-10 14:55:47.083786 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-01-10 14:55:47.083791 | orchestrator | 2026-01-10 14:55:47.083796 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-01-10 14:55:47.083802 | orchestrator | Saturday 10 January 2026 14:52:34 +0000 (0:00:14.966) 0:01:29.895 ****** 2026-01-10 14:55:47.083807 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.083812 | orchestrator | 2026-01-10 14:55:47.083817 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-01-10 14:55:47.083830 | orchestrator | Saturday 10 January 2026 14:52:39 +0000 (0:00:05.111) 0:01:35.007 ****** 2026-01-10 14:55:47.083836 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.083846 | orchestrator | 2026-01-10 14:55:47.083851 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-01-10 14:55:47.083856 | orchestrator | Saturday 10 January 2026 14:52:44 +0000 
(0:00:04.826) 0:01:39.834 ****** 2026-01-10 14:55:47.083862 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:47.083867 | orchestrator | 2026-01-10 14:55:47.083873 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-01-10 14:55:47.083878 | orchestrator | Saturday 10 January 2026 14:52:44 +0000 (0:00:00.223) 0:01:40.057 ****** 2026-01-10 14:55:47.083883 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:47.083888 | orchestrator | 2026-01-10 14:55:47.083893 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:55:47.083899 | orchestrator | Saturday 10 January 2026 14:52:48 +0000 (0:00:04.195) 0:01:44.253 ****** 2026-01-10 14:55:47.083904 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-1, testbed-node-2, testbed-node-0 2026-01-10 14:55:47.083909 | orchestrator | 2026-01-10 14:55:47.083915 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-01-10 14:55:47.083918 | orchestrator | Saturday 10 January 2026 14:52:50 +0000 (0:00:01.263) 0:01:45.517 ****** 2026-01-10 14:55:47.083921 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.083924 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.083931 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.083937 | orchestrator | 2026-01-10 14:55:47.083942 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-01-10 14:55:47.083948 | orchestrator | Saturday 10 January 2026 14:52:55 +0000 (0:00:05.530) 0:01:51.047 ****** 2026-01-10 14:55:47.083954 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.083960 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.083965 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.083971 | orchestrator | 2026-01-10 14:55:47.083976 | orchestrator | TASK [octavia : Add Octavia 
port to openvswitch br-int] ************************ 2026-01-10 14:55:47.083982 | orchestrator | Saturday 10 January 2026 14:52:59 +0000 (0:00:04.135) 0:01:55.183 ****** 2026-01-10 14:55:47.083987 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.083992 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.084016 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.084024 | orchestrator | 2026-01-10 14:55:47.084029 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-01-10 14:55:47.084033 | orchestrator | Saturday 10 January 2026 14:53:00 +0000 (0:00:00.761) 0:01:55.945 ****** 2026-01-10 14:55:47.084038 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:55:47.084042 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:55:47.084047 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:47.084052 | orchestrator | 2026-01-10 14:55:47.084057 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-01-10 14:55:47.084061 | orchestrator | Saturday 10 January 2026 14:53:02 +0000 (0:00:02.018) 0:01:57.963 ****** 2026-01-10 14:55:47.084067 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.084076 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.084081 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.084086 | orchestrator | 2026-01-10 14:55:47.084091 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-01-10 14:55:47.084097 | orchestrator | Saturday 10 January 2026 14:53:03 +0000 (0:00:01.259) 0:01:59.222 ****** 2026-01-10 14:55:47.084100 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.084103 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.084106 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.084109 | orchestrator | 2026-01-10 14:55:47.084112 | orchestrator | TASK [octavia : Restart octavia-interface.service if 
required] ***************** 2026-01-10 14:55:47.084115 | orchestrator | Saturday 10 January 2026 14:53:04 +0000 (0:00:01.106) 0:02:00.328 ****** 2026-01-10 14:55:47.084119 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.084122 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.084125 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.084128 | orchestrator | 2026-01-10 14:55:47.084135 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-01-10 14:55:47.084138 | orchestrator | Saturday 10 January 2026 14:53:07 +0000 (0:00:02.242) 0:02:02.571 ****** 2026-01-10 14:55:47.084141 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.084144 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.084147 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.084150 | orchestrator | 2026-01-10 14:55:47.084153 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-01-10 14:55:47.084156 | orchestrator | Saturday 10 January 2026 14:53:09 +0000 (0:00:01.813) 0:02:04.385 ****** 2026-01-10 14:55:47.084159 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:47.084163 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:55:47.084166 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:55:47.084169 | orchestrator | 2026-01-10 14:55:47.084172 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-01-10 14:55:47.084175 | orchestrator | Saturday 10 January 2026 14:53:09 +0000 (0:00:00.771) 0:02:05.157 ****** 2026-01-10 14:55:47.084178 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:55:47.084181 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:55:47.084184 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:47.084187 | orchestrator | 2026-01-10 14:55:47.084194 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 
14:55:47.084197 | orchestrator | Saturday 10 January 2026 14:53:12 +0000 (0:00:03.170) 0:02:08.328 ****** 2026-01-10 14:55:47.084200 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:47.084203 | orchestrator | 2026-01-10 14:55:47.084206 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-01-10 14:55:47.084210 | orchestrator | Saturday 10 January 2026 14:53:14 +0000 (0:00:01.054) 0:02:09.382 ****** 2026-01-10 14:55:47.084213 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:47.084216 | orchestrator | 2026-01-10 14:55:47.084219 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-10 14:55:47.084222 | orchestrator | Saturday 10 January 2026 14:53:18 +0000 (0:00:04.429) 0:02:13.812 ****** 2026-01-10 14:55:47.084225 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:47.084228 | orchestrator | 2026-01-10 14:55:47.084231 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-01-10 14:55:47.084234 | orchestrator | Saturday 10 January 2026 14:53:21 +0000 (0:00:03.028) 0:02:16.840 ****** 2026-01-10 14:55:47.084237 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-01-10 14:55:47.084240 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-01-10 14:55:47.084243 | orchestrator | 2026-01-10 14:55:47.084246 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-01-10 14:55:47.084249 | orchestrator | Saturday 10 January 2026 14:53:28 +0000 (0:00:06.642) 0:02:23.483 ****** 2026-01-10 14:55:47.084252 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:47.084255 | orchestrator | 2026-01-10 14:55:47.084258 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-01-10 14:55:47.084261 | 
orchestrator | Saturday 10 January 2026 14:53:31 +0000 (0:00:03.479) 0:02:26.963 ****** 2026-01-10 14:55:47.084264 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:47.084267 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:55:47.084271 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:55:47.084273 | orchestrator | 2026-01-10 14:55:47.084276 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-01-10 14:55:47.084280 | orchestrator | Saturday 10 January 2026 14:53:31 +0000 (0:00:00.402) 0:02:27.365 ****** 2026-01-10 14:55:47.084284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.084294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.084300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.084304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
2026-01-10 14:55:47.084308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.084311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.084315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084371 | orchestrator | 2026-01-10 14:55:47.084377 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-01-10 14:55:47.084382 | orchestrator | Saturday 10 January 2026 14:53:34 +0000 (0:00:02.913) 0:02:30.279 ****** 2026-01-10 14:55:47.084387 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:47.084392 | orchestrator | 2026-01-10 14:55:47.084419 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-01-10 14:55:47.084427 | orchestrator | Saturday 10 January 2026 14:53:35 +0000 (0:00:00.139) 0:02:30.418 ****** 2026-01-10 14:55:47.084430 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:47.084433 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:47.084437 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:47.084440 | orchestrator | 2026-01-10 14:55:47.084443 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-01-10 
14:55:47.084447 | orchestrator | Saturday 10 January 2026 14:53:35 +0000 (0:00:00.620) 0:02:31.039 ****** 2026-01-10 14:55:47.084450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:55:47.084454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:55:47.084458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:55:47.084473 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:47.084480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:55:47.084484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:55:47.084487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:55:47.084501 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:47.084507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:55:47.084513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:55:47.084516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084523 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:55:47.084526 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:47.084529 | orchestrator | 2026-01-10 14:55:47.084532 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:55:47.084535 | orchestrator | Saturday 10 January 2026 14:53:36 +0000 (0:00:00.777) 0:02:31.816 ****** 2026-01-10 14:55:47.084538 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:47.084542 | orchestrator | 2026-01-10 14:55:47.084545 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-10 14:55:47.084548 | orchestrator | Saturday 10 January 2026 14:53:37 +0000 (0:00:00.668) 0:02:32.485 ****** 2026-01-10 14:55:47.084556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.084571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.084577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.084582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.084587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.084593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.084607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084664 | orchestrator | 2026-01-10 14:55:47.084669 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-10 14:55:47.084674 | orchestrator | Saturday 10 January 2026 14:53:43 +0000 (0:00:05.988) 0:02:38.474 ****** 2026-01-10 14:55:47.084679 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:55:47.084684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:55:47.084689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:55:47.084711 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:47.084720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:55:47.084725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:55:47.084731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:55:47.084752 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:47.084759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:55:47.084765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:55:47.084774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:55:47.084787 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:47.084790 | orchestrator | 2026-01-10 14:55:47.084793 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-10 14:55:47.084796 | orchestrator | Saturday 10 January 2026 14:53:44 +0000 (0:00:01.090) 0:02:39.564 ****** 2026-01-10 14:55:47.084799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:55:47.084803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:55:47.084808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:55:47.084817 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:55:47.084825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:55:47.084829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084832 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:47.084838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:55:47.084847 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:47.084851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:55:47.084857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:55:47.084860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-01-10 14:55:47.084863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:55:47.084868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:55:47.084871 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:47.084874 | orchestrator | 2026-01-10 14:55:47.084877 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-10 14:55:47.084881 | orchestrator | Saturday 10 January 2026 14:53:45 +0000 (0:00:01.309) 0:02:40.874 ****** 2026-01-10 14:55:47.084887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.084893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.084898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.084903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.084915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.084921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.084929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.084989 | orchestrator | 2026-01-10 14:55:47.084995 | 
orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-01-10 14:55:47.085000 | orchestrator | Saturday 10 January 2026 14:53:51 +0000 (0:00:05.591) 0:02:46.466 ****** 2026-01-10 14:55:47.085005 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-10 14:55:47.085011 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-10 14:55:47.085016 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-10 14:55:47.085021 | orchestrator | 2026-01-10 14:55:47.085026 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-10 14:55:47.085032 | orchestrator | Saturday 10 January 2026 14:53:53 +0000 (0:00:02.050) 0:02:48.516 ****** 2026-01-10 14:55:47.085037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.085047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.085054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.085060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.085063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.085066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.085070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085099 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085107 | orchestrator | 2026-01-10 14:55:47.085110 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-01-10 14:55:47.085114 | orchestrator | Saturday 10 January 2026 14:54:11 +0000 (0:00:18.435) 0:03:06.951 ****** 2026-01-10 14:55:47.085119 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085122 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.085125 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.085128 | orchestrator | 2026-01-10 14:55:47.085131 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-10 14:55:47.085135 | orchestrator | Saturday 10 January 2026 14:54:13 +0000 (0:00:01.499) 0:03:08.450 ****** 2026-01-10 14:55:47.085138 | orchestrator 
| changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-10 14:55:47.085141 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-10 14:55:47.085145 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-10 14:55:47.085149 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-10 14:55:47.085152 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-10 14:55:47.085155 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-10 14:55:47.085158 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-10 14:55:47.085162 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-10 14:55:47.085165 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-10 14:55:47.085168 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-10 14:55:47.085171 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-10 14:55:47.085174 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-10 14:55:47.085177 | orchestrator | 2026-01-10 14:55:47.085180 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-01-10 14:55:47.085183 | orchestrator | Saturday 10 January 2026 14:54:18 +0000 (0:00:05.144) 0:03:13.595 ****** 2026-01-10 14:55:47.085187 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-10 14:55:47.085190 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-10 14:55:47.085193 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-10 14:55:47.085196 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-10 14:55:47.085199 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-10 14:55:47.085202 | orchestrator | 
changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-10 14:55:47.085205 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-10 14:55:47.085208 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-10 14:55:47.085211 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-10 14:55:47.085214 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-10 14:55:47.085217 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-10 14:55:47.085220 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-10 14:55:47.085223 | orchestrator | 2026-01-10 14:55:47.085226 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-10 14:55:47.085229 | orchestrator | Saturday 10 January 2026 14:54:23 +0000 (0:00:05.359) 0:03:18.954 ****** 2026-01-10 14:55:47.085232 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-10 14:55:47.085235 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-10 14:55:47.085239 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-10 14:55:47.085242 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-10 14:55:47.085245 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-10 14:55:47.085248 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-10 14:55:47.085251 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-10 14:55:47.085254 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-10 14:55:47.085257 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-10 14:55:47.085263 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-10 14:55:47.085266 | orchestrator | changed: 
[testbed-node-0] => (item=server_ca.key.pem) 2026-01-10 14:55:47.085269 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-10 14:55:47.085272 | orchestrator | 2026-01-10 14:55:47.085275 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-01-10 14:55:47.085278 | orchestrator | Saturday 10 January 2026 14:54:30 +0000 (0:00:06.473) 0:03:25.427 ****** 2026-01-10 14:55:47.085285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.085291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.085295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:55:47.085298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.085302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.085307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:55:47.085312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:55:47.085350 | orchestrator | 2026-01-10 14:55:47.085353 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:55:47.085356 | orchestrator | Saturday 10 January 2026 14:54:34 +0000 (0:00:04.107) 0:03:29.535 ****** 2026-01-10 14:55:47.085360 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:47.085363 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:47.085366 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:47.085369 | orchestrator | 2026-01-10 14:55:47.085372 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-10 14:55:47.085375 | orchestrator | Saturday 10 January 2026 14:54:34 +0000 (0:00:00.243) 0:03:29.778 ****** 2026-01-10 14:55:47.085378 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085381 | orchestrator | 2026-01-10 14:55:47.085384 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-10 14:55:47.085388 | orchestrator | Saturday 10 January 2026 14:54:36 +0000 (0:00:02.282) 0:03:32.061 ****** 2026-01-10 14:55:47.085391 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085394 | orchestrator | 2026-01-10 
14:55:47.085397 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-01-10 14:55:47.085400 | orchestrator | Saturday 10 January 2026 14:54:39 +0000 (0:00:02.355) 0:03:34.416 ****** 2026-01-10 14:55:47.085403 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085406 | orchestrator | 2026-01-10 14:55:47.085424 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-10 14:55:47.085433 | orchestrator | Saturday 10 January 2026 14:54:42 +0000 (0:00:03.007) 0:03:37.424 ****** 2026-01-10 14:55:47.085436 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085439 | orchestrator | 2026-01-10 14:55:47.085442 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-01-10 14:55:47.085445 | orchestrator | Saturday 10 January 2026 14:54:45 +0000 (0:00:03.081) 0:03:40.506 ****** 2026-01-10 14:55:47.085449 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085452 | orchestrator | 2026-01-10 14:55:47.085455 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-10 14:55:47.085458 | orchestrator | Saturday 10 January 2026 14:55:05 +0000 (0:00:20.717) 0:04:01.223 ****** 2026-01-10 14:55:47.085461 | orchestrator | 2026-01-10 14:55:47.085464 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-10 14:55:47.085467 | orchestrator | Saturday 10 January 2026 14:55:05 +0000 (0:00:00.086) 0:04:01.310 ****** 2026-01-10 14:55:47.085470 | orchestrator | 2026-01-10 14:55:47.085473 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-10 14:55:47.085476 | orchestrator | Saturday 10 January 2026 14:55:06 +0000 (0:00:00.083) 0:04:01.393 ****** 2026-01-10 14:55:47.085480 | orchestrator | 2026-01-10 14:55:47.085483 | orchestrator | RUNNING HANDLER [octavia : 
Restart octavia-api container] ********************** 2026-01-10 14:55:47.085486 | orchestrator | Saturday 10 January 2026 14:55:06 +0000 (0:00:00.089) 0:04:01.482 ****** 2026-01-10 14:55:47.085489 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085492 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.085495 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.085498 | orchestrator | 2026-01-10 14:55:47.085501 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-01-10 14:55:47.085504 | orchestrator | Saturday 10 January 2026 14:55:21 +0000 (0:00:15.444) 0:04:16.927 ****** 2026-01-10 14:55:47.085507 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085510 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.085513 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.085516 | orchestrator | 2026-01-10 14:55:47.085519 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-01-10 14:55:47.085522 | orchestrator | Saturday 10 January 2026 14:55:28 +0000 (0:00:06.455) 0:04:23.383 ****** 2026-01-10 14:55:47.085525 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085528 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.085531 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.085534 | orchestrator | 2026-01-10 14:55:47.085537 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-01-10 14:55:47.085540 | orchestrator | Saturday 10 January 2026 14:55:33 +0000 (0:00:05.433) 0:04:28.817 ****** 2026-01-10 14:55:47.085544 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085547 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.085550 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.085553 | orchestrator | 2026-01-10 14:55:47.085556 | orchestrator | RUNNING HANDLER [octavia : Restart 
octavia-worker container] ******************* 2026-01-10 14:55:47.085559 | orchestrator | Saturday 10 January 2026 14:55:38 +0000 (0:00:05.239) 0:04:34.057 ****** 2026-01-10 14:55:47.085562 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:47.085565 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:47.085568 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:47.085571 | orchestrator | 2026-01-10 14:55:47.085574 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:55:47.085580 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:55:47.085583 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:55:47.085586 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-10 14:55:47.085591 | orchestrator | 2026-01-10 14:55:47.085595 | orchestrator | 2026-01-10 14:55:47.085598 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:55:47.085601 | orchestrator | Saturday 10 January 2026 14:55:44 +0000 (0:00:05.661) 0:04:39.718 ****** 2026-01-10 14:55:47.085631 | orchestrator | =============================================================================== 2026-01-10 14:55:47.085635 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.72s 2026-01-10 14:55:47.085639 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.44s 2026-01-10 14:55:47.085642 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.19s 2026-01-10 14:55:47.085645 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.44s 2026-01-10 14:55:47.085648 | orchestrator | octavia : Add rules for security groups 
-------------------------------- 14.97s 2026-01-10 14:55:47.085651 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.12s 2026-01-10 14:55:47.085654 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.48s 2026-01-10 14:55:47.085657 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.10s 2026-01-10 14:55:47.085660 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.64s 2026-01-10 14:55:47.085663 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.60s 2026-01-10 14:55:47.085666 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.47s 2026-01-10 14:55:47.085669 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.46s 2026-01-10 14:55:47.085672 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.99s 2026-01-10 14:55:47.085675 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 5.77s 2026-01-10 14:55:47.085678 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.66s 2026-01-10 14:55:47.085681 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.59s 2026-01-10 14:55:47.085684 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.53s 2026-01-10 14:55:47.085687 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.43s 2026-01-10 14:55:47.085690 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.36s 2026-01-10 14:55:47.085693 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.24s 2026-01-10 14:55:47.085697 | orchestrator | 2026-01-10 14:55:47 | INFO  | Wait 1 second(s) until 
refresh of running tasks 2026-01-10 14:55:50.127905 | orchestrator | 2026-01-10 14:55:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:55:53.170845 | orchestrator | 2026-01-10 14:55:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:55:56.210781 | orchestrator | 2026-01-10 14:55:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:55:59.254736 | orchestrator | 2026-01-10 14:55:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:02.298538 | orchestrator | 2026-01-10 14:56:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:05.345819 | orchestrator | 2026-01-10 14:56:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:08.392854 | orchestrator | 2026-01-10 14:56:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:11.435616 | orchestrator | 2026-01-10 14:56:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:14.485307 | orchestrator | 2026-01-10 14:56:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:17.530531 | orchestrator | 2026-01-10 14:56:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:20.576407 | orchestrator | 2026-01-10 14:56:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:23.621968 | orchestrator | 2026-01-10 14:56:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:26.668076 | orchestrator | 2026-01-10 14:56:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:29.716199 | orchestrator | 2026-01-10 14:56:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:32.758871 | orchestrator | 2026-01-10 14:56:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:35.811835 | orchestrator | 2026-01-10 14:56:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 
14:56:38.857282 | orchestrator | 2026-01-10 14:56:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:41.898928 | orchestrator | 2026-01-10 14:56:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:44.945246 | orchestrator | 2026-01-10 14:56:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:56:47.994391 | orchestrator | 2026-01-10 14:56:48.319954 | orchestrator | 2026-01-10 14:56:48.324012 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Jan 10 14:56:48 UTC 2026 2026-01-10 14:56:48.324075 | orchestrator | 2026-01-10 14:56:48.712830 | orchestrator | ok: Runtime: 0:35:13.899862 2026-01-10 14:56:48.967585 | 2026-01-10 14:56:48.967741 | TASK [Bootstrap services] 2026-01-10 14:56:49.733509 | orchestrator | 2026-01-10 14:56:49.733613 | orchestrator | # BOOTSTRAP 2026-01-10 14:56:49.733626 | orchestrator | 2026-01-10 14:56:49.733635 | orchestrator | + set -e 2026-01-10 14:56:49.733644 | orchestrator | + echo 2026-01-10 14:56:49.733653 | orchestrator | + echo '# BOOTSTRAP' 2026-01-10 14:56:49.733663 | orchestrator | + echo 2026-01-10 14:56:49.733688 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-01-10 14:56:49.743834 | orchestrator | + set -e 2026-01-10 14:56:49.743899 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-01-10 14:56:54.664711 | orchestrator | 2026-01-10 14:56:54 | INFO  | It takes a moment until task e2fbb754-bf06-40fd-96a9-602aa540d9c6 (flavor-manager) has been started and output is visible here. 
2026-01-10 14:57:01.974325 | orchestrator | 2026-01-10 14:56:57 | INFO  | Flavor SCS-1L-1 created 2026-01-10 14:57:01.974414 | orchestrator | 2026-01-10 14:56:58 | INFO  | Flavor SCS-1L-1-5 created 2026-01-10 14:57:01.974426 | orchestrator | 2026-01-10 14:56:58 | INFO  | Flavor SCS-1V-2 created 2026-01-10 14:57:01.974432 | orchestrator | 2026-01-10 14:56:58 | INFO  | Flavor SCS-1V-2-5 created 2026-01-10 14:57:01.974437 | orchestrator | 2026-01-10 14:56:58 | INFO  | Flavor SCS-1V-4 created 2026-01-10 14:57:01.974442 | orchestrator | 2026-01-10 14:56:58 | INFO  | Flavor SCS-1V-4-10 created 2026-01-10 14:57:01.974447 | orchestrator | 2026-01-10 14:56:58 | INFO  | Flavor SCS-1V-8 created 2026-01-10 14:57:01.974452 | orchestrator | 2026-01-10 14:56:59 | INFO  | Flavor SCS-1V-8-20 created 2026-01-10 14:57:01.974462 | orchestrator | 2026-01-10 14:56:59 | INFO  | Flavor SCS-2V-4 created 2026-01-10 14:57:01.974466 | orchestrator | 2026-01-10 14:56:59 | INFO  | Flavor SCS-2V-4-10 created 2026-01-10 14:57:01.974470 | orchestrator | 2026-01-10 14:56:59 | INFO  | Flavor SCS-2V-8 created 2026-01-10 14:57:01.974474 | orchestrator | 2026-01-10 14:56:59 | INFO  | Flavor SCS-2V-8-20 created 2026-01-10 14:57:01.974478 | orchestrator | 2026-01-10 14:56:59 | INFO  | Flavor SCS-2V-16 created 2026-01-10 14:57:01.974482 | orchestrator | 2026-01-10 14:56:59 | INFO  | Flavor SCS-2V-16-50 created 2026-01-10 14:57:01.974486 | orchestrator | 2026-01-10 14:57:00 | INFO  | Flavor SCS-4V-8 created 2026-01-10 14:57:01.974490 | orchestrator | 2026-01-10 14:57:00 | INFO  | Flavor SCS-4V-8-20 created 2026-01-10 14:57:01.974494 | orchestrator | 2026-01-10 14:57:00 | INFO  | Flavor SCS-4V-16 created 2026-01-10 14:57:01.974498 | orchestrator | 2026-01-10 14:57:00 | INFO  | Flavor SCS-4V-16-50 created 2026-01-10 14:57:01.974501 | orchestrator | 2026-01-10 14:57:00 | INFO  | Flavor SCS-4V-32 created 2026-01-10 14:57:01.974505 | orchestrator | 2026-01-10 14:57:00 | INFO  | Flavor SCS-4V-32-100 created 
2026-01-10 14:57:01.974509 | orchestrator | 2026-01-10 14:57:00 | INFO  | Flavor SCS-8V-16 created 2026-01-10 14:57:01.974513 | orchestrator | 2026-01-10 14:57:00 | INFO  | Flavor SCS-8V-16-50 created 2026-01-10 14:57:01.974517 | orchestrator | 2026-01-10 14:57:00 | INFO  | Flavor SCS-8V-32 created 2026-01-10 14:57:01.974521 | orchestrator | 2026-01-10 14:57:01 | INFO  | Flavor SCS-8V-32-100 created 2026-01-10 14:57:01.974525 | orchestrator | 2026-01-10 14:57:01 | INFO  | Flavor SCS-16V-32 created 2026-01-10 14:57:01.974529 | orchestrator | 2026-01-10 14:57:01 | INFO  | Flavor SCS-16V-32-100 created 2026-01-10 14:57:01.974533 | orchestrator | 2026-01-10 14:57:01 | INFO  | Flavor SCS-2V-4-20s created 2026-01-10 14:57:01.974536 | orchestrator | 2026-01-10 14:57:01 | INFO  | Flavor SCS-4V-8-50s created 2026-01-10 14:57:01.974540 | orchestrator | 2026-01-10 14:57:01 | INFO  | Flavor SCS-8V-32-100s created 2026-01-10 14:57:04.562662 | orchestrator | 2026-01-10 14:57:04 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-01-10 14:57:14.665971 | orchestrator | 2026-01-10 14:57:14 | INFO  | Task 282f095d-42c5-4f14-b54d-ef456c21fd9e (bootstrap-basic) was prepared for execution. 2026-01-10 14:57:14.666087 | orchestrator | 2026-01-10 14:57:14 | INFO  | It takes a moment until task 282f095d-42c5-4f14-b54d-ef456c21fd9e (bootstrap-basic) has been started and output is visible here. 
2026-01-10 14:58:02.665047 | orchestrator |
2026-01-10 14:58:02.665135 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-01-10 14:58:02.665144 | orchestrator |
2026-01-10 14:58:02.665150 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 14:58:02.665155 | orchestrator | Saturday 10 January 2026 14:57:19 +0000 (0:00:00.088) 0:00:00.088 ******
2026-01-10 14:58:02.665160 | orchestrator | ok: [localhost]
2026-01-10 14:58:02.665166 | orchestrator |
2026-01-10 14:58:02.665173 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-01-10 14:58:02.665181 | orchestrator | Saturday 10 January 2026 14:57:21 +0000 (0:00:01.958) 0:00:02.047 ******
2026-01-10 14:58:02.665203 | orchestrator | ok: [localhost]
2026-01-10 14:58:02.665210 | orchestrator |
2026-01-10 14:58:02.665218 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-01-10 14:58:02.665224 | orchestrator | Saturday 10 January 2026 14:57:30 +0000 (0:00:09.535) 0:00:11.582 ******
2026-01-10 14:58:02.665231 | orchestrator | changed: [localhost]
2026-01-10 14:58:02.665237 | orchestrator |
2026-01-10 14:58:02.665244 | orchestrator | TASK [Create public network] ***************************************************
2026-01-10 14:58:02.665251 | orchestrator | Saturday 10 January 2026 14:57:38 +0000 (0:00:07.755) 0:00:19.337 ******
2026-01-10 14:58:02.665258 | orchestrator | changed: [localhost]
2026-01-10 14:58:02.665264 | orchestrator |
2026-01-10 14:58:02.665271 | orchestrator | TASK [Set public network to default] *******************************************
2026-01-10 14:58:02.665277 | orchestrator | Saturday 10 January 2026 14:57:43 +0000 (0:00:05.081) 0:00:24.419 ******
2026-01-10 14:58:02.665288 | orchestrator | changed: [localhost]
2026-01-10 14:58:02.665294 | orchestrator |
2026-01-10 14:58:02.665301 | orchestrator | TASK [Create public subnet] ****************************************************
2026-01-10 14:58:02.665307 | orchestrator | Saturday 10 January 2026 14:57:50 +0000 (0:00:06.933) 0:00:31.352 ******
2026-01-10 14:58:02.665314 | orchestrator | changed: [localhost]
2026-01-10 14:58:02.665319 | orchestrator |
2026-01-10 14:58:02.665326 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-01-10 14:58:02.665333 | orchestrator | Saturday 10 January 2026 14:57:54 +0000 (0:00:04.261) 0:00:35.613 ******
2026-01-10 14:58:02.665340 | orchestrator | changed: [localhost]
2026-01-10 14:58:02.665346 | orchestrator |
2026-01-10 14:58:02.665353 | orchestrator | TASK [Create manager role] *****************************************************
2026-01-10 14:58:02.665370 | orchestrator | Saturday 10 January 2026 14:57:58 +0000 (0:00:03.749) 0:00:39.362 ******
2026-01-10 14:58:02.665377 | orchestrator | ok: [localhost]
2026-01-10 14:58:02.665384 | orchestrator |
2026-01-10 14:58:02.665390 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:58:02.665398 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:58:02.665408 | orchestrator |
2026-01-10 14:58:02.665425 | orchestrator |
2026-01-10 14:58:02.665432 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:58:02.665439 | orchestrator | Saturday 10 January 2026 14:58:02 +0000 (0:00:03.737) 0:00:43.100 ******
2026-01-10 14:58:02.665445 | orchestrator | ===============================================================================
2026-01-10 14:58:02.665452 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.54s
2026-01-10 14:58:02.665460 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.76s
2026-01-10 14:58:02.665466 | orchestrator | Set public network to default ------------------------------------------- 6.93s
2026-01-10 14:58:02.665473 | orchestrator | Create public network --------------------------------------------------- 5.08s
2026-01-10 14:58:02.665503 | orchestrator | Create public subnet ---------------------------------------------------- 4.26s
2026-01-10 14:58:02.665511 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.75s
2026-01-10 14:58:02.665519 | orchestrator | Create manager role ----------------------------------------------------- 3.74s
2026-01-10 14:58:02.665525 | orchestrator | Gathering Facts --------------------------------------------------------- 1.96s
2026-01-10 14:58:05.370554 | orchestrator | 2026-01-10 14:58:05 | INFO  | It takes a moment until task 74a5e4c6-be2f-470a-b810-5c5116ff23fc (image-manager) has been started and output is visible here.
2026-01-10 14:58:45.935203 | orchestrator | 2026-01-10 14:58:08 | INFO  | Processing image 'Cirros 0.6.2'
2026-01-10 14:58:45.935329 | orchestrator | 2026-01-10 14:58:08 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-01-10 14:58:45.935349 | orchestrator | 2026-01-10 14:58:08 | INFO  | Importing image Cirros 0.6.2
2026-01-10 14:58:45.936066 | orchestrator | 2026-01-10 14:58:08 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-10 14:58:45.936106 | orchestrator | 2026-01-10 14:58:10 | INFO  | Waiting for image to leave queued state...
2026-01-10 14:58:45.936185 | orchestrator | 2026-01-10 14:58:12 | INFO  | Waiting for import to complete...
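The image-manager output above shows the import pattern it repeats for each image: start the import, wait for the image to leave the `queued` state, then poll until the import completes. A minimal sketch of such a poll loop, with `get_status` standing in for a Glance image-status lookup (the function name and parameters are illustrative, not the image-manager's actual API):

```python
import time

# Sketch of the wait loop logged above ("Waiting for import to
# complete..."): poll a status callable until it reports "active",
# sleeping between attempts and giving up after a deadline.
def wait_for_active(get_status, interval=10, timeout=600,
                    now=time.monotonic, sleep=time.sleep):
    deadline = now() + timeout
    while True:
        status = get_status()
        if status == "active":
            return status
        if status == "error":
            raise RuntimeError("image import failed")
        if now() >= deadline:
            raise TimeoutError("image did not become active in time")
        sleep(interval)  # e.g. one log line per poll in the output above
```

Injecting `now` and `sleep` keeps the loop testable without real delays; in the log, each "Waiting for import to complete..." line corresponds to one such poll iteration roughly ten seconds apart.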
2026-01-10 14:58:45.936198 | orchestrator | 2026-01-10 14:58:23 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-01-10 14:58:45.936210 | orchestrator | 2026-01-10 14:58:23 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-01-10 14:58:45.936221 | orchestrator | 2026-01-10 14:58:23 | INFO  | Setting internal_version = 0.6.2 2026-01-10 14:58:45.936233 | orchestrator | 2026-01-10 14:58:23 | INFO  | Setting image_original_user = cirros 2026-01-10 14:58:45.936244 | orchestrator | 2026-01-10 14:58:23 | INFO  | Adding tag os:cirros 2026-01-10 14:58:45.936255 | orchestrator | 2026-01-10 14:58:23 | INFO  | Setting property architecture: x86_64 2026-01-10 14:58:45.936266 | orchestrator | 2026-01-10 14:58:23 | INFO  | Setting property hw_disk_bus: scsi 2026-01-10 14:58:45.936277 | orchestrator | 2026-01-10 14:58:24 | INFO  | Setting property hw_rng_model: virtio 2026-01-10 14:58:45.936288 | orchestrator | 2026-01-10 14:58:24 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-10 14:58:45.936299 | orchestrator | 2026-01-10 14:58:24 | INFO  | Setting property hw_watchdog_action: reset 2026-01-10 14:58:45.936310 | orchestrator | 2026-01-10 14:58:24 | INFO  | Setting property hypervisor_type: qemu 2026-01-10 14:58:45.936322 | orchestrator | 2026-01-10 14:58:25 | INFO  | Setting property os_distro: cirros 2026-01-10 14:58:45.936333 | orchestrator | 2026-01-10 14:58:25 | INFO  | Setting property os_purpose: minimal 2026-01-10 14:58:45.936344 | orchestrator | 2026-01-10 14:58:25 | INFO  | Setting property replace_frequency: never 2026-01-10 14:58:45.936355 | orchestrator | 2026-01-10 14:58:25 | INFO  | Setting property uuid_validity: none 2026-01-10 14:58:45.936365 | orchestrator | 2026-01-10 14:58:25 | INFO  | Setting property provided_until: none 2026-01-10 14:58:45.936382 | orchestrator | 2026-01-10 14:58:26 | INFO  | Setting property image_description: Cirros 2026-01-10 14:58:45.936408 | orchestrator | 2026-01-10 14:58:26 | INFO  | 
Setting property image_name: Cirros 2026-01-10 14:58:45.936432 | orchestrator | 2026-01-10 14:58:26 | INFO  | Setting property internal_version: 0.6.2 2026-01-10 14:58:45.936451 | orchestrator | 2026-01-10 14:58:26 | INFO  | Setting property image_original_user: cirros 2026-01-10 14:58:45.936497 | orchestrator | 2026-01-10 14:58:27 | INFO  | Setting property os_version: 0.6.2 2026-01-10 14:58:45.936528 | orchestrator | 2026-01-10 14:58:27 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-10 14:58:45.936548 | orchestrator | 2026-01-10 14:58:27 | INFO  | Setting property image_build_date: 2023-05-30 2026-01-10 14:58:45.936566 | orchestrator | 2026-01-10 14:58:27 | INFO  | Checking status of 'Cirros 0.6.2' 2026-01-10 14:58:45.936584 | orchestrator | 2026-01-10 14:58:27 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-01-10 14:58:45.936602 | orchestrator | 2026-01-10 14:58:27 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-01-10 14:58:45.936622 | orchestrator | 2026-01-10 14:58:27 | INFO  | Processing image 'Cirros 0.6.3' 2026-01-10 14:58:45.936647 | orchestrator | 2026-01-10 14:58:28 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-01-10 14:58:45.936667 | orchestrator | 2026-01-10 14:58:28 | INFO  | Importing image Cirros 0.6.3 2026-01-10 14:58:45.936686 | orchestrator | 2026-01-10 14:58:28 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-10 14:58:45.936705 | orchestrator | 2026-01-10 14:58:28 | INFO  | Waiting for image to leave queued state... 2026-01-10 14:58:45.936724 | orchestrator | 2026-01-10 14:58:30 | INFO  | Waiting for import to complete... 
2026-01-10 14:58:45.936771 | orchestrator | 2026-01-10 14:58:40 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-01-10 14:58:45.936792 | orchestrator | 2026-01-10 14:58:41 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-01-10 14:58:45.936811 | orchestrator | 2026-01-10 14:58:41 | INFO  | Setting internal_version = 0.6.3 2026-01-10 14:58:45.936829 | orchestrator | 2026-01-10 14:58:41 | INFO  | Setting image_original_user = cirros 2026-01-10 14:58:45.936849 | orchestrator | 2026-01-10 14:58:41 | INFO  | Adding tag os:cirros 2026-01-10 14:58:45.936867 | orchestrator | 2026-01-10 14:58:41 | INFO  | Setting property architecture: x86_64 2026-01-10 14:58:45.936886 | orchestrator | 2026-01-10 14:58:41 | INFO  | Setting property hw_disk_bus: scsi 2026-01-10 14:58:45.936905 | orchestrator | 2026-01-10 14:58:42 | INFO  | Setting property hw_rng_model: virtio 2026-01-10 14:58:45.936924 | orchestrator | 2026-01-10 14:58:42 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-10 14:58:45.936944 | orchestrator | 2026-01-10 14:58:42 | INFO  | Setting property hw_watchdog_action: reset 2026-01-10 14:58:45.936964 | orchestrator | 2026-01-10 14:58:42 | INFO  | Setting property hypervisor_type: qemu 2026-01-10 14:58:45.936985 | orchestrator | 2026-01-10 14:58:42 | INFO  | Setting property os_distro: cirros 2026-01-10 14:58:45.937006 | orchestrator | 2026-01-10 14:58:43 | INFO  | Setting property os_purpose: minimal 2026-01-10 14:58:45.937026 | orchestrator | 2026-01-10 14:58:43 | INFO  | Setting property replace_frequency: never 2026-01-10 14:58:45.937046 | orchestrator | 2026-01-10 14:58:43 | INFO  | Setting property uuid_validity: none 2026-01-10 14:58:45.937066 | orchestrator | 2026-01-10 14:58:43 | INFO  | Setting property provided_until: none 2026-01-10 14:58:45.937085 | orchestrator | 2026-01-10 14:58:44 | INFO  | Setting property image_description: Cirros 2026-01-10 14:58:45.937104 | orchestrator | 2026-01-10 14:58:44 | INFO  | 
Setting property image_name: Cirros 2026-01-10 14:58:45.937148 | orchestrator | 2026-01-10 14:58:44 | INFO  | Setting property internal_version: 0.6.3 2026-01-10 14:58:45.937184 | orchestrator | 2026-01-10 14:58:44 | INFO  | Setting property image_original_user: cirros 2026-01-10 14:58:45.937203 | orchestrator | 2026-01-10 14:58:44 | INFO  | Setting property os_version: 0.6.3 2026-01-10 14:58:45.937222 | orchestrator | 2026-01-10 14:58:44 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-10 14:58:45.937240 | orchestrator | 2026-01-10 14:58:45 | INFO  | Setting property image_build_date: 2024-09-26 2026-01-10 14:58:45.937258 | orchestrator | 2026-01-10 14:58:45 | INFO  | Checking status of 'Cirros 0.6.3' 2026-01-10 14:58:45.937276 | orchestrator | 2026-01-10 14:58:45 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-01-10 14:58:45.937294 | orchestrator | 2026-01-10 14:58:45 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-01-10 14:58:46.460202 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-01-10 14:58:49.077353 | orchestrator | 2026-01-10 14:58:49 | INFO  | date: 2026-01-10 2026-01-10 14:58:49.077445 | orchestrator | 2026-01-10 14:58:49 | INFO  | image: octavia-amphora-haproxy-2024.2.20260110.qcow2 2026-01-10 14:58:49.077584 | orchestrator | 2026-01-10 14:58:49 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260110.qcow2 2026-01-10 14:58:49.078239 | orchestrator | 2026-01-10 14:58:49 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260110.qcow2.CHECKSUM 2026-01-10 14:58:49.255287 | orchestrator | 2026-01-10 14:58:49 | INFO  | checksum: ae42c33b510a5d6430e8d5e850fcf0e0166b59a495061a775b2e6eb290d4c686 2026-01-10 14:58:49.341798 | orchestrator | 
2026-01-10 14:58:49 | INFO  | It takes a moment until task 2b6542a4-cf7f-4b7a-a05e-60ce89544bc8 (image-manager) has been started and output is visible here. 2026-01-10 14:59:52.345319 | orchestrator | 2026-01-10 14:58:51 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-10' 2026-01-10 14:59:52.345429 | orchestrator | 2026-01-10 14:58:51 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260110.qcow2: 200 2026-01-10 14:59:52.345447 | orchestrator | 2026-01-10 14:58:51 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-10 2026-01-10 14:59:52.345459 | orchestrator | 2026-01-10 14:58:51 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260110.qcow2 2026-01-10 14:59:52.345472 | orchestrator | 2026-01-10 14:58:54 | INFO  | Waiting for image to leave queued state... 2026-01-10 14:59:52.345483 | orchestrator | 2026-01-10 14:58:56 | INFO  | Waiting for import to complete... 2026-01-10 14:59:52.345495 | orchestrator | 2026-01-10 14:59:06 | INFO  | Waiting for import to complete... 2026-01-10 14:59:52.345506 | orchestrator | 2026-01-10 14:59:16 | INFO  | Waiting for import to complete... 2026-01-10 14:59:52.345517 | orchestrator | 2026-01-10 14:59:26 | INFO  | Waiting for import to complete... 2026-01-10 14:59:52.345530 | orchestrator | 2026-01-10 14:59:36 | INFO  | Waiting for import to complete... 
2026-01-10 14:59:52.345543 | orchestrator | 2026-01-10 14:59:46 | INFO  | Import of 'OpenStack Octavia Amphora 2026-01-10' successfully completed, reloading images 2026-01-10 14:59:52.345556 | orchestrator | 2026-01-10 14:59:46 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-01-10' 2026-01-10 14:59:52.345567 | orchestrator | 2026-01-10 14:59:46 | INFO  | Setting internal_version = 2026-01-10 2026-01-10 14:59:52.345603 | orchestrator | 2026-01-10 14:59:46 | INFO  | Setting image_original_user = ubuntu 2026-01-10 14:59:52.345615 | orchestrator | 2026-01-10 14:59:46 | INFO  | Adding tag amphora 2026-01-10 14:59:52.345626 | orchestrator | 2026-01-10 14:59:47 | INFO  | Adding tag os:ubuntu 2026-01-10 14:59:52.345637 | orchestrator | 2026-01-10 14:59:47 | INFO  | Setting property architecture: x86_64 2026-01-10 14:59:52.345648 | orchestrator | 2026-01-10 14:59:47 | INFO  | Setting property hw_disk_bus: scsi 2026-01-10 14:59:52.345658 | orchestrator | 2026-01-10 14:59:48 | INFO  | Setting property hw_rng_model: virtio 2026-01-10 14:59:52.345670 | orchestrator | 2026-01-10 14:59:48 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-10 14:59:52.345681 | orchestrator | 2026-01-10 14:59:48 | INFO  | Setting property hw_watchdog_action: reset 2026-01-10 14:59:52.345692 | orchestrator | 2026-01-10 14:59:48 | INFO  | Setting property hypervisor_type: qemu 2026-01-10 14:59:52.345703 | orchestrator | 2026-01-10 14:59:49 | INFO  | Setting property os_distro: ubuntu 2026-01-10 14:59:52.345728 | orchestrator | 2026-01-10 14:59:49 | INFO  | Setting property replace_frequency: quarterly 2026-01-10 14:59:52.345739 | orchestrator | 2026-01-10 14:59:49 | INFO  | Setting property uuid_validity: last-1 2026-01-10 14:59:52.345750 | orchestrator | 2026-01-10 14:59:49 | INFO  | Setting property provided_until: none 2026-01-10 14:59:52.345761 | orchestrator | 2026-01-10 14:59:50 | INFO  | Setting property os_purpose: network 2026-01-10 14:59:52.345772 | orchestrator 
| 2026-01-10 14:59:50 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-01-10 14:59:52.345797 | orchestrator | 2026-01-10 14:59:50 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-01-10 14:59:52.345809 | orchestrator | 2026-01-10 14:59:50 | INFO  | Setting property internal_version: 2026-01-10 2026-01-10 14:59:52.345820 | orchestrator | 2026-01-10 14:59:51 | INFO  | Setting property image_original_user: ubuntu 2026-01-10 14:59:52.345830 | orchestrator | 2026-01-10 14:59:51 | INFO  | Setting property os_version: 2026-01-10 2026-01-10 14:59:52.345843 | orchestrator | 2026-01-10 14:59:51 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260110.qcow2 2026-01-10 14:59:52.345856 | orchestrator | 2026-01-10 14:59:51 | INFO  | Setting property image_build_date: 2026-01-10 2026-01-10 14:59:52.345868 | orchestrator | 2026-01-10 14:59:51 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-01-10' 2026-01-10 14:59:52.345880 | orchestrator | 2026-01-10 14:59:51 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-01-10' 2026-01-10 14:59:52.345893 | orchestrator | 2026-01-10 14:59:52 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-01-10 14:59:52.345923 | orchestrator | 2026-01-10 14:59:52 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-01-10 14:59:52.345937 | orchestrator | 2026-01-10 14:59:52 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-01-10 14:59:52.345949 | orchestrator | 2026-01-10 14:59:52 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-01-10 14:59:53.175520 | orchestrator | ok: Runtime: 0:03:03.460455 2026-01-10 14:59:53.205570 | 2026-01-10 14:59:53.205787 | TASK [Run checks] 2026-01-10 14:59:53.922592 | orchestrator | + set -e 2026-01-10 14:59:53.922818 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-01-10 14:59:53.922853 | orchestrator | ++ export INTERACTIVE=false 2026-01-10 14:59:53.922885 | orchestrator | ++ INTERACTIVE=false 2026-01-10 14:59:53.922904 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 14:59:53.922916 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-10 14:59:53.922929 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-10 14:59:53.923357 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-10 14:59:53.929574 | orchestrator | 2026-01-10 14:59:53.929698 | orchestrator | # CHECK 2026-01-10 14:59:53.929714 | orchestrator | 2026-01-10 14:59:53.929727 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-10 14:59:53.929745 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-10 14:59:53.929757 | orchestrator | + echo 2026-01-10 14:59:53.929769 | orchestrator | + echo '# CHECK' 2026-01-10 14:59:53.929784 | orchestrator | + echo 2026-01-10 14:59:53.929857 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-10 14:59:53.930078 | orchestrator | ++ semver latest 5.0.0 2026-01-10 14:59:53.995718 | orchestrator | 2026-01-10 14:59:53.995819 | orchestrator | ## Containers @ testbed-manager 2026-01-10 14:59:53.995833 | orchestrator | 2026-01-10 14:59:53.995847 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-10 14:59:53.995857 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 14:59:53.995868 | orchestrator | + echo 2026-01-10 14:59:53.995879 | orchestrator | + echo '## Containers @ testbed-manager' 2026-01-10 14:59:53.995889 | orchestrator | + echo 2026-01-10 14:59:53.995899 | orchestrator | + osism container testbed-manager ps 2026-01-10 14:59:56.181769 | orchestrator | 2026-01-10 14:59:56 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-01-10 14:59:56.621112 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
2026-01-10 14:59:56.621226 | orchestrator | e244d64a4958 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter 2026-01-10 14:59:56.621250 | orchestrator | 983af05e2032 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager 2026-01-10 14:59:56.621264 | orchestrator | 809cc312de2a registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2026-01-10 14:59:56.621284 | orchestrator | 8a8299db9782 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2026-01-10 14:59:56.621303 | orchestrator | 11d7303d56af registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server 2026-01-10 14:59:56.621317 | orchestrator | 6c6bd0d01a70 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2026-01-10 14:59:56.621329 | orchestrator | 30e45e1715e6 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2026-01-10 14:59:56.621341 | orchestrator | eb735e616a2b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-01-10 14:59:56.621383 | orchestrator | 322d9dc76936 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2026-01-10 14:59:56.621398 | orchestrator | 4a66b6f8b887 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-01-10 14:59:56.621410 | orchestrator | 0cd7f6631308 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient 2026-01-10 14:59:56.621422 | orchestrator | 0989dc4fd0fb 
registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2026-01-10 14:59:56.621434 | orchestrator | bc585a7d42a9 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 55 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-01-10 14:59:56.621446 | orchestrator | 357397624b0f registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2026-01-10 14:59:56.621526 | orchestrator | effa86528743 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) osism-ansible 2026-01-10 14:59:56.621568 | orchestrator | bec7e4b5d9e7 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) kolla-ansible 2026-01-10 14:59:56.621581 | orchestrator | e27e5af2a9b3 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) ceph-ansible 2026-01-10 14:59:56.621592 | orchestrator | 83f53d60b0ef registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) osism-kubernetes 2026-01-10 14:59:56.621603 | orchestrator | d0c62310ec9d registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2026-01-10 14:59:56.621615 | orchestrator | b5f53bf4e9f0 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1 2026-01-10 14:59:56.621625 | orchestrator | bee9d2a0a3b1 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 39 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-01-10 14:59:56.621636 | orchestrator | 4577aa5a80d4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an 
hour ago Up 39 minutes (healthy) manager-openstack-1 2026-01-10 14:59:56.621648 | orchestrator | 8eeee7da2bb5 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 39 minutes (healthy) 6379/tcp manager-redis-1 2026-01-10 14:59:56.621671 | orchestrator | 429f5000cc14 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 39 minutes (healthy) manager-flower-1 2026-01-10 14:59:56.621684 | orchestrator | bf20b955aad0 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 39 minutes (healthy) osismclient 2026-01-10 14:59:56.621696 | orchestrator | ebdb43ea8e3c registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 39 minutes (healthy) manager-beat-1 2026-01-10 14:59:56.621707 | orchestrator | d3fc785e26a6 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 39 minutes (healthy) manager-listener-1 2026-01-10 14:59:56.621719 | orchestrator | f49eeed6084d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-01-10 14:59:56.621730 | orchestrator | d34305244cf6 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-01-10 14:59:57.061926 | orchestrator | 2026-01-10 14:59:57.062067 | orchestrator | ## Images @ testbed-manager 2026-01-10 14:59:57.062079 | orchestrator | 2026-01-10 14:59:57.062084 | orchestrator | + echo 2026-01-10 14:59:57.062090 | orchestrator | + echo '## Images @ testbed-manager' 2026-01-10 14:59:57.062095 | orchestrator | + echo 2026-01-10 14:59:57.062102 | orchestrator | + osism container testbed-manager images 2026-01-10 14:59:59.536381 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-10 14:59:59.536508 | orchestrator | 
registry.osism.tech/osism/openstackclient 2024.2 bed886ed5921 11 hours ago 238MB 2026-01-10 14:59:59.536521 | orchestrator | registry.osism.tech/osism/cephclient reef b441644e2eee 12 hours ago 453MB 2026-01-10 14:59:59.536529 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7095452e5332 13 hours ago 271MB 2026-01-10 14:59:59.536535 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 f69b470de803 13 hours ago 585MB 2026-01-10 14:59:59.536543 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 c4dc895df892 13 hours ago 675MB 2026-01-10 14:59:59.536550 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 d00ea12831a7 13 hours ago 313MB 2026-01-10 14:59:59.536556 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 525ea4ed7cc0 13 hours ago 844MB 2026-01-10 14:59:59.536562 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 47802b9af8e3 13 hours ago 311MB 2026-01-10 14:59:59.536569 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 189aa2300c26 13 hours ago 363MB 2026-01-10 14:59:59.536576 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 fc785fb1249a 13 hours ago 409MB 2026-01-10 14:59:59.536582 | orchestrator | registry.osism.tech/osism/osism-ansible latest b59653aa95c7 15 hours ago 611MB 2026-01-10 14:59:59.536588 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 773d8b3ff6ac 15 hours ago 560MB 2026-01-10 14:59:59.536595 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest f30a21f2039f 15 hours ago 1.23GB 2026-01-10 14:59:59.536623 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 648afa1cc5e7 15 hours ago 607MB 2026-01-10 14:59:59.536631 | orchestrator | registry.osism.tech/osism/osism latest 8c29e414bab3 15 hours ago 384MB 2026-01-10 14:59:59.536637 | orchestrator | registry.osism.tech/osism/osism-frontend latest 5e85434cda64 15 hours ago 239MB 2026-01-10 14:59:59.536644 | orchestrator | 
registry.osism.tech/osism/inventory-reconciler latest c82dc72740b7 15 hours ago 335MB 2026-01-10 14:59:59.536650 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 5 weeks ago 11.5MB 2026-01-10 14:59:59.536657 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 8 weeks ago 334MB 2026-01-10 14:59:59.536663 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine 13105d2858de 2 months ago 41.4MB 2026-01-10 14:59:59.536669 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 months ago 742MB 2026-01-10 14:59:59.536676 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 4 months ago 275MB 2026-01-10 14:59:59.536682 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 5 months ago 226MB 2026-01-10 14:59:59.536688 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 19 months ago 146MB 2026-01-10 14:59:59.941902 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-10 14:59:59.942708 | orchestrator | ++ semver latest 5.0.0 2026-01-10 14:59:59.991769 | orchestrator | 2026-01-10 14:59:59.991849 | orchestrator | ## Containers @ testbed-node-0 2026-01-10 14:59:59.991856 | orchestrator | 2026-01-10 14:59:59.991861 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-10 14:59:59.991865 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 14:59:59.991870 | orchestrator | + echo 2026-01-10 14:59:59.991874 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-01-10 14:59:59.991879 | orchestrator | + echo 2026-01-10 14:59:59.991883 | orchestrator | + osism container testbed-node-0 ps 2026-01-10 15:00:02.571262 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-10 15:00:02.571351 | orchestrator | f9658b586715 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-10 
15:00:02.571373 | orchestrator | f723a5ee184c registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-01-10 15:00:02.571380 | orchestrator | 483fc3081a8d registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-01-10 15:00:02.571408 | orchestrator | d4a7551851c5 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-01-10 15:00:02.571415 | orchestrator | 0bcb076469f5 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-01-10 15:00:02.571422 | orchestrator | 7372ae6d117a registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-01-10 15:00:02.571431 | orchestrator | 38ec0589a352 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-01-10 15:00:02.571438 | orchestrator | f57d7c095a8d registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2026-01-10 15:00:02.571447 | orchestrator | 7cdfeb38c553 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-01-10 15:00:02.571473 | orchestrator | f31fbaf1a084 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2026-01-10 15:00:02.571479 | orchestrator | d6384ee5fda0 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup 2026-01-10 15:00:02.571485 | orchestrator | 7b7a0bd290ab registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_volume 2026-01-10 15:00:02.571490 | 
orchestrator | 73d214b05dc5 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2026-01-10 15:00:02.571497 | orchestrator | ef6830b15a06 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2026-01-10 15:00:02.571502 | orchestrator | cd3acdbedb96 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-01-10 15:00:02.571508 | orchestrator | c8017907dc59 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2026-01-10 15:00:02.571514 | orchestrator | 07dae37387ad registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2026-01-10 15:00:02.571520 | orchestrator | 31dacf01ce95 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-01-10 15:00:02.571526 | orchestrator | d3c4c79c7a5b registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-01-10 15:00:02.571532 | orchestrator | 9867c20608c4 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2026-01-10 15:00:02.571538 | orchestrator | 6e686b55dc94 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2026-01-10 15:00:02.571559 | orchestrator | 7853b7c85a21 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2026-01-10 15:00:02.571569 | orchestrator | 8b3fe64a355b registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 
minutes (healthy) neutron_server 2026-01-10 15:00:02.571575 | orchestrator | 89db513a2a1e registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) designate_worker 2026-01-10 15:00:02.571580 | orchestrator | 0280686fe1c7 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2026-01-10 15:00:02.571589 | orchestrator | 33cdbaf2cc90 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2026-01-10 15:00:02.571595 | orchestrator | 3a2b5703e4ae registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2026-01-10 15:00:02.571600 | orchestrator | 94b1905a04da registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2026-01-10 15:00:02.571612 | orchestrator | 9fa3c91fcf7a registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2026-01-10 15:00:02.571617 | orchestrator | e625d6595368 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) barbican_worker 2026-01-10 15:00:02.571623 | orchestrator | e432346822d0 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2026-01-10 15:00:02.571629 | orchestrator | bce5b4d9cf19 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-01-10 15:00:02.571635 | orchestrator | 08aac1fbc61c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2026-01-10 15:00:02.571641 | orchestrator | 3a8743166f02 
registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2026-01-10 15:00:02.571647 | orchestrator | 22187e8179dd registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2026-01-10 15:00:02.571653 | orchestrator | be041c634886 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-01-10 15:00:02.571659 | orchestrator | 064500b4b32a registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-01-10 15:00:02.571665 | orchestrator | 928471f8f34a registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-01-10 15:00:02.571671 | orchestrator | b9cfbd15a4b3 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-01-10 15:00:02.571677 | orchestrator | 9a687319b15a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-01-10 15:00:02.571684 | orchestrator | 5571671d7148 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2026-01-10 15:00:02.571688 | orchestrator | 2d55c361ef00 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-01-10 15:00:02.571692 | orchestrator | 926489661a73 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-01-10 15:00:02.571696 | orchestrator | f1cc5ef2b5f1 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-01-10 15:00:02.571704 | orchestrator | 9af5d713785d registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 
24 minutes ago Up 24 minutes (healthy) haproxy 2026-01-10 15:00:02.571708 | orchestrator | 7eedc2454acb registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2026-01-10 15:00:02.571716 | orchestrator | dfcee41e12e8 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2026-01-10 15:00:02.571720 | orchestrator | e0b2dbf705f9 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2026-01-10 15:00:02.571728 | orchestrator | f716979d2299 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2026-01-10 15:00:02.571731 | orchestrator | b2baafab6c4e registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2026-01-10 15:00:02.571735 | orchestrator | 3f2ae5700b6e registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-01-10 15:00:02.571739 | orchestrator | b8684606bdf3 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-01-10 15:00:02.571743 | orchestrator | 4ddc9c11a280 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-01-10 15:00:02.571747 | orchestrator | 197b10e28142 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-01-10 15:00:02.571751 | orchestrator | a2c1a58f7fe4 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-01-10 15:00:02.571754 | orchestrator | 2b3a711229a6 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-01-10 15:00:02.571758 | 
orchestrator | 1eecc1a7a7bd registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2026-01-10 15:00:02.571762 | orchestrator | c60dd0b9d3ca registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-01-10 15:00:02.571766 | orchestrator | 42dfefd1006a registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-01-10 15:00:03.002388 | orchestrator | 2026-01-10 15:00:03.002490 | orchestrator | ## Images @ testbed-node-0 2026-01-10 15:00:03.002501 | orchestrator | 2026-01-10 15:00:03.002509 | orchestrator | + echo 2026-01-10 15:00:03.002517 | orchestrator | + echo '## Images @ testbed-node-0' 2026-01-10 15:00:03.002525 | orchestrator | + echo 2026-01-10 15:00:03.002530 | orchestrator | + osism container testbed-node-0 images 2026-01-10 15:00:05.502833 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-10 15:00:05.502927 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2ffb60ff6501 12 hours ago 1.27GB 2026-01-10 15:00:05.502939 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c6e6d76ef14c 13 hours ago 272MB 2026-01-10 15:00:05.502947 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 8c44feb2d7a6 13 hours ago 1.02GB 2026-01-10 15:00:05.502954 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 fe54d9472228 13 hours ago 1.56GB 2026-01-10 15:00:05.502961 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 e8119f1424cc 13 hours ago 1.53GB 2026-01-10 15:00:05.502968 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7095452e5332 13 hours ago 271MB 2026-01-10 15:00:05.502975 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d4efb5144edd 13 hours ago 417MB 2026-01-10 15:00:05.502982 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 f69b470de803 13 hours ago 585MB 2026-01-10 15:00:05.502989 | orchestrator | registry.osism.tech/kolla/keepalived 
2024.2 738a7fd75c3a 13 hours ago 282MB 2026-01-10 15:00:05.502996 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 13ae22f9119b 13 hours ago 279MB 2026-01-10 15:00:05.503077 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 d045f87980bf 13 hours ago 328MB 2026-01-10 15:00:05.503088 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 c4dc895df892 13 hours ago 675MB 2026-01-10 15:00:05.503094 | orchestrator | registry.osism.tech/kolla/redis 2024.2 0656d122f10d 13 hours ago 278MB 2026-01-10 15:00:05.503101 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 b8cddc89bea5 13 hours ago 278MB 2026-01-10 15:00:05.503108 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 6726b3688e46 13 hours ago 458MB 2026-01-10 15:00:05.503114 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5af3a8262f9d 13 hours ago 1.16GB 2026-01-10 15:00:05.503120 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 9940050049be 13 hours ago 284MB 2026-01-10 15:00:05.503125 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 bd7e66d37dcb 13 hours ago 284MB 2026-01-10 15:00:05.503131 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 a92e2c2998a1 13 hours ago 306MB 2026-01-10 15:00:05.503137 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 b2f0f61513e9 13 hours ago 297MB 2026-01-10 15:00:05.503143 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 f10a21348073 13 hours ago 304MB 2026-01-10 15:00:05.503150 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 47802b9af8e3 13 hours ago 311MB 2026-01-10 15:00:05.503156 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 189aa2300c26 13 hours ago 363MB 2026-01-10 15:00:05.503163 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 6c4560ea419e 13 hours ago 995MB 2026-01-10 15:00:05.503170 | orchestrator | 
registry.osism.tech/kolla/skyline-console 2024.2 ec06b7c5f1af 13 hours ago 1.05GB 2026-01-10 15:00:05.503177 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 703f990809de 13 hours ago 1.06GB 2026-01-10 15:00:05.503184 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 7377b333eb41 13 hours ago 1.03GB 2026-01-10 15:00:05.503190 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 40ef6a23eef3 13 hours ago 1.03GB 2026-01-10 15:00:05.503210 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 cd90406dbcbf 13 hours ago 1.06GB 2026-01-10 15:00:05.503218 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 2cedaca7de0f 13 hours ago 1.03GB 2026-01-10 15:00:05.503224 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 7256fa6044bc 13 hours ago 1.22GB 2026-01-10 15:00:05.503230 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 2c05f68e2ac3 13 hours ago 1.22GB 2026-01-10 15:00:05.503237 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 b4e2c918cceb 13 hours ago 1.37GB 2026-01-10 15:00:05.503244 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 77191a405166 13 hours ago 1.22GB 2026-01-10 15:00:05.503250 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 14e50a72c144 13 hours ago 1.1GB 2026-01-10 15:00:05.503257 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 4d29da55c510 13 hours ago 1.72GB 2026-01-10 15:00:05.503285 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 39030b3ffb1e 13 hours ago 1.41GB 2026-01-10 15:00:05.503293 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 3fbbe03b2464 13 hours ago 1.41GB 2026-01-10 15:00:05.503300 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 9cdebff3218e 13 hours ago 1.42GB 2026-01-10 15:00:05.503315 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 03f28db2c09d 13 hours ago 980MB 2026-01-10 15:00:05.503321 | orchestrator 
| registry.osism.tech/kolla/aodh-notifier 2024.2 af8abe3a4bb1 13 hours ago 980MB 2026-01-10 15:00:05.503327 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 5898e879d70d 13 hours ago 980MB 2026-01-10 15:00:05.503333 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 c9691ef3ace8 13 hours ago 979MB 2026-01-10 15:00:05.503339 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 ff2a97ab03b1 13 hours ago 981MB 2026-01-10 15:00:05.503345 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 1c2d4d8c0e44 13 hours ago 981MB 2026-01-10 15:00:05.503350 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 df55ff90cd04 13 hours ago 982MB 2026-01-10 15:00:05.503356 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 eb9f23d976a8 13 hours ago 1.25GB 2026-01-10 15:00:05.503362 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 c7206841ed53 13 hours ago 1.13GB 2026-01-10 15:00:05.503368 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 d35919d11266 13 hours ago 1.17GB 2026-01-10 15:00:05.503374 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 eb35cf201ba2 13 hours ago 989MB 2026-01-10 15:00:05.503381 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 091aa5c747a0 13 hours ago 990MB 2026-01-10 15:00:05.503388 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d9179f8dd840 13 hours ago 994MB 2026-01-10 15:00:05.503394 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 214353d5e35e 13 hours ago 990MB 2026-01-10 15:00:05.503401 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 036b811dc775 13 hours ago 990MB 2026-01-10 15:00:05.503408 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 eb0c61e2a22c 13 hours ago 994MB 2026-01-10 15:00:05.503415 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 9ef5e978081e 13 hours ago 997MB 2026-01-10 
15:00:05.503421 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 966b1cebb92e 13 hours ago 996MB 2026-01-10 15:00:05.503428 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 5f533c242846 13 hours ago 997MB 2026-01-10 15:00:05.503440 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 4cf1f2ebfaf4 13 hours ago 1.04GB 2026-01-10 15:00:05.503447 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a035d12eae22 13 hours ago 1.05GB 2026-01-10 15:00:05.503455 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 adaf0a51859f 13 hours ago 1.09GB 2026-01-10 15:00:05.503463 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a82ec71fa2e5 13 hours ago 846MB 2026-01-10 15:00:05.503471 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 89e445636e48 13 hours ago 846MB 2026-01-10 15:00:05.503479 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 8d9ab516e3e2 13 hours ago 846MB 2026-01-10 15:00:05.503487 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 4ddd4a3a84e6 13 hours ago 846MB 2026-01-10 15:00:05.846120 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-10 15:00:05.846671 | orchestrator | ++ semver latest 5.0.0 2026-01-10 15:00:05.914350 | orchestrator | 2026-01-10 15:00:05.914428 | orchestrator | ## Containers @ testbed-node-1 2026-01-10 15:00:05.914440 | orchestrator | 2026-01-10 15:00:05.914447 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-10 15:00:05.914454 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 15:00:05.914461 | orchestrator | + echo 2026-01-10 15:00:05.914469 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-01-10 15:00:05.914476 | orchestrator | + echo 2026-01-10 15:00:05.914506 | orchestrator | + osism container testbed-node-1 ps 2026-01-10 15:00:08.489375 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-10 15:00:08.489449 | orchestrator | 
2ac64b79ee49 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-10 15:00:08.489458 | orchestrator | d17a7d287c2a registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-01-10 15:00:08.489463 | orchestrator | 0c1fd8dacba7 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-01-10 15:00:08.489467 | orchestrator | df309a97fd79 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-01-10 15:00:08.489471 | orchestrator | e56903da0317 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-01-10 15:00:08.489485 | orchestrator | c5a3ada3ac01 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-01-10 15:00:08.489494 | orchestrator | 28b67c6bb113 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-01-10 15:00:08.489499 | orchestrator | 1699f40b7bb4 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-01-10 15:00:08.489506 | orchestrator | de07ef76367c registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2026-01-10 15:00:08.489511 | orchestrator | 41110d571152 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-01-10 15:00:08.489514 | orchestrator | 96847929d09e registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) cinder_backup 2026-01-10 15:00:08.489518 | orchestrator | c3607e90ee1b 
registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_volume 2026-01-10 15:00:08.489522 | orchestrator | c5a0bf0a0d51 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2026-01-10 15:00:08.489526 | orchestrator | 5e25dd0d03c4 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2026-01-10 15:00:08.489544 | orchestrator | 1390a924b991 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-01-10 15:00:08.489549 | orchestrator | cac86f9da26b registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2026-01-10 15:00:08.489553 | orchestrator | 7cf0aef471d8 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-01-10 15:00:08.489557 | orchestrator | 2879cacede86 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-01-10 15:00:08.489561 | orchestrator | 9f0c4fcaa522 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-01-10 15:00:08.489579 | orchestrator | c2127cfe83d7 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2026-01-10 15:00:08.489583 | orchestrator | 4086d64d069b registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2026-01-10 15:00:08.489599 | orchestrator | 1794fac7f6bf registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 
2026-01-10 15:00:08.489603 | orchestrator | 69f83da60984 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2026-01-10 15:00:08.489607 | orchestrator | 2fbf9bf693b0 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2026-01-10 15:00:08.489611 | orchestrator | 4ffac074a85c registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2026-01-10 15:00:08.489614 | orchestrator | caf4b965fbe8 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2026-01-10 15:00:08.489618 | orchestrator | 73e52fa881c0 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2026-01-10 15:00:08.489622 | orchestrator | 2cb526404841 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2026-01-10 15:00:08.489626 | orchestrator | 67cee56cdc68 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2026-01-10 15:00:08.489629 | orchestrator | 95d91fe1f73d registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2026-01-10 15:00:08.489633 | orchestrator | 508f14c5fb7f registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2026-01-10 15:00:08.489637 | orchestrator | f46187dcdf31 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-01-10 15:00:08.489641 | orchestrator | 1c05f13096c6 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes 
ago Up 16 minutes (healthy) barbican_keystone_listener 2026-01-10 15:00:08.489645 | orchestrator | 2c54302ca7f0 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2026-01-10 15:00:08.489648 | orchestrator | e89b945003f6 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2026-01-10 15:00:08.489652 | orchestrator | e24ff33715fd registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-01-10 15:00:08.489656 | orchestrator | eb0b8c0b219d registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-01-10 15:00:08.489664 | orchestrator | 5a17f723a4c7 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-01-10 15:00:08.489672 | orchestrator | 52357e021279 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2026-01-10 15:00:08.489675 | orchestrator | 0bb88b7a0574 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-01-10 15:00:08.489679 | orchestrator | 8566f206b9f9 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-01-10 15:00:08.489683 | orchestrator | 5cde9ba0ebd3 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1 2026-01-10 15:00:08.489687 | orchestrator | 1cbae108920f registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-01-10 15:00:08.489691 | orchestrator | 4cd915b7add6 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-01-10 
15:00:08.489698 | orchestrator | 987c808f2e01 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2026-01-10 15:00:08.489702 | orchestrator | 790dd3a0967f registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2026-01-10 15:00:08.489705 | orchestrator | 6ec9739602a8 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2026-01-10 15:00:08.489709 | orchestrator | f7530dae53d2 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2026-01-10 15:00:08.489713 | orchestrator | 55e9cfd78d87 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2026-01-10 15:00:08.489717 | orchestrator | 259652262be5 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-01-10 15:00:08.489721 | orchestrator | b928fb9e337f registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-01-10 15:00:08.489725 | orchestrator | 7c5d0ca5d759 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-01-10 15:00:08.489728 | orchestrator | e34a9ba5ff37 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-01-10 15:00:08.489732 | orchestrator | 7e073048d92e registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-01-10 15:00:08.489736 | orchestrator | 392d47a23b1b registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-01-10 15:00:08.489740 | orchestrator | 2c0a70f3bc5b 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-01-10 15:00:08.489744 | orchestrator | 591fad26c705 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-01-10 15:00:08.489748 | orchestrator | 24a09eb6142d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-01-10 15:00:08.489755 | orchestrator | ed4325f1f879 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-01-10 15:00:08.923183 | orchestrator | 2026-01-10 15:00:08.923257 | orchestrator | ## Images @ testbed-node-1 2026-01-10 15:00:08.923264 | orchestrator | 2026-01-10 15:00:08.923270 | orchestrator | + echo 2026-01-10 15:00:08.923275 | orchestrator | + echo '## Images @ testbed-node-1' 2026-01-10 15:00:08.923281 | orchestrator | + echo 2026-01-10 15:00:08.923287 | orchestrator | + osism container testbed-node-1 images 2026-01-10 15:00:11.517629 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-10 15:00:11.517725 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2ffb60ff6501 12 hours ago 1.27GB 2026-01-10 15:00:11.517734 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c6e6d76ef14c 13 hours ago 272MB 2026-01-10 15:00:11.517741 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 8c44feb2d7a6 13 hours ago 1.02GB 2026-01-10 15:00:11.517748 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 fe54d9472228 13 hours ago 1.56GB 2026-01-10 15:00:11.517756 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 e8119f1424cc 13 hours ago 1.53GB 2026-01-10 15:00:11.517763 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7095452e5332 13 hours ago 271MB 2026-01-10 15:00:11.517785 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 f69b470de803 13 hours ago 585MB 2026-01-10 15:00:11.517791 | orchestrator | 
registry.osism.tech/kolla/proxysql 2024.2 d4efb5144edd 13 hours ago 417MB 2026-01-10 15:00:11.517797 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 738a7fd75c3a 13 hours ago 282MB 2026-01-10 15:00:11.517803 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 13ae22f9119b 13 hours ago 279MB 2026-01-10 15:00:11.517810 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 d045f87980bf 13 hours ago 328MB 2026-01-10 15:00:11.517819 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 c4dc895df892 13 hours ago 675MB 2026-01-10 15:00:11.517825 | orchestrator | registry.osism.tech/kolla/redis 2024.2 0656d122f10d 13 hours ago 278MB 2026-01-10 15:00:11.517831 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 b8cddc89bea5 13 hours ago 278MB 2026-01-10 15:00:11.517837 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 6726b3688e46 13 hours ago 458MB 2026-01-10 15:00:11.517843 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5af3a8262f9d 13 hours ago 1.16GB 2026-01-10 15:00:11.517848 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 9940050049be 13 hours ago 284MB 2026-01-10 15:00:11.517854 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 bd7e66d37dcb 13 hours ago 284MB 2026-01-10 15:00:11.517860 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 a92e2c2998a1 13 hours ago 306MB 2026-01-10 15:00:11.517866 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 b2f0f61513e9 13 hours ago 297MB 2026-01-10 15:00:11.517873 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 f10a21348073 13 hours ago 304MB 2026-01-10 15:00:11.517880 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 47802b9af8e3 13 hours ago 311MB 2026-01-10 15:00:11.517885 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 189aa2300c26 13 hours ago 363MB 2026-01-10 
15:00:11.517891 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 703f990809de 13 hours ago 1.06GB 2026-01-10 15:00:11.517897 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 7377b333eb41 13 hours ago 1.03GB 2026-01-10 15:00:11.517925 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 40ef6a23eef3 13 hours ago 1.03GB 2026-01-10 15:00:11.517931 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 cd90406dbcbf 13 hours ago 1.06GB 2026-01-10 15:00:11.517937 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 2cedaca7de0f 13 hours ago 1.03GB 2026-01-10 15:00:11.517943 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 7256fa6044bc 13 hours ago 1.22GB 2026-01-10 15:00:11.517950 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 2c05f68e2ac3 13 hours ago 1.22GB 2026-01-10 15:00:11.517956 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 b4e2c918cceb 13 hours ago 1.37GB 2026-01-10 15:00:11.517962 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 77191a405166 13 hours ago 1.22GB 2026-01-10 15:00:11.517969 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 14e50a72c144 13 hours ago 1.1GB 2026-01-10 15:00:11.517975 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 4d29da55c510 13 hours ago 1.72GB 2026-01-10 15:00:11.517981 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 39030b3ffb1e 13 hours ago 1.41GB 2026-01-10 15:00:11.517987 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 3fbbe03b2464 13 hours ago 1.41GB 2026-01-10 15:00:11.518008 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 9cdebff3218e 13 hours ago 1.42GB 2026-01-10 15:00:11.518098 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 ff2a97ab03b1 13 hours ago 981MB 2026-01-10 15:00:11.518105 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 eb9f23d976a8 13 hours ago 1.25GB 
2026-01-10 15:00:11.518110 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 c7206841ed53 13 hours ago 1.13GB
2026-01-10 15:00:11.518117 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 d35919d11266 13 hours ago 1.17GB
2026-01-10 15:00:11.518123 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 eb35cf201ba2 13 hours ago 989MB
2026-01-10 15:00:11.518129 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 091aa5c747a0 13 hours ago 990MB
2026-01-10 15:00:11.518135 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d9179f8dd840 13 hours ago 994MB
2026-01-10 15:00:11.518141 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 214353d5e35e 13 hours ago 990MB
2026-01-10 15:00:11.518146 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 036b811dc775 13 hours ago 990MB
2026-01-10 15:00:11.518152 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 eb0c61e2a22c 13 hours ago 994MB
2026-01-10 15:00:11.518158 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 9ef5e978081e 13 hours ago 997MB
2026-01-10 15:00:11.518163 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 966b1cebb92e 13 hours ago 996MB
2026-01-10 15:00:11.518168 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 5f533c242846 13 hours ago 997MB
2026-01-10 15:00:11.518174 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 4cf1f2ebfaf4 13 hours ago 1.04GB
2026-01-10 15:00:11.518180 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a035d12eae22 13 hours ago 1.05GB
2026-01-10 15:00:11.518185 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 adaf0a51859f 13 hours ago 1.09GB
2026-01-10 15:00:11.518192 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a82ec71fa2e5 13 hours ago 846MB
2026-01-10 15:00:11.518198 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 89e445636e48 13 hours ago 846MB
2026-01-10 15:00:11.518219 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 8d9ab516e3e2 13 hours ago 846MB
2026-01-10 15:00:11.518225 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 4ddd4a3a84e6 13 hours ago 846MB
2026-01-10 15:00:11.937171 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-01-10 15:00:11.937441 | orchestrator | ++ semver latest 5.0.0
2026-01-10 15:00:12.009598 | orchestrator |
2026-01-10 15:00:12.009685 | orchestrator | ## Containers @ testbed-node-2
2026-01-10 15:00:12.009694 | orchestrator |
2026-01-10 15:00:12.009701 | orchestrator | + [[ -1 -eq -1 ]]
2026-01-10 15:00:12.009707 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-10 15:00:12.009713 | orchestrator | + echo
2026-01-10 15:00:12.009719 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-01-10 15:00:12.009725 | orchestrator | + echo
2026-01-10 15:00:12.009731 | orchestrator | + osism container testbed-node-2 ps
2026-01-10 15:00:14.573391 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-01-10 15:00:14.573485 | orchestrator | 5bd56770e52c registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-01-10 15:00:14.573494 | orchestrator | 23c5470cbce0 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-01-10 15:00:14.573499 | orchestrator | 8c34bbd8f256 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-01-10 15:00:14.573503 | orchestrator | 0e8d8416fab5 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-01-10 15:00:14.573507 | orchestrator | 9d077172c52f registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-01-10 15:00:14.573512 | orchestrator | 019a6247a00b registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2026-01-10 15:00:14.573515 | orchestrator | 1c27301c342a registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2026-01-10 15:00:14.573519 | orchestrator | 441b6ca2bb3d registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2026-01-10 15:00:14.573524 | orchestrator | a2159e6c3237 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api
2026-01-10 15:00:14.573528 | orchestrator | e5f74360e94f registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-01-10 15:00:14.573531 | orchestrator | 492d36579909 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_backup
2026-01-10 15:00:14.573544 | orchestrator | 63d896b2ff26 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_volume
2026-01-10 15:00:14.573573 | orchestrator | 2c1eebc40373 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2026-01-10 15:00:14.573579 | orchestrator | cf44441c3dda registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2026-01-10 15:00:14.573583 | orchestrator | b6330f658234 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2026-01-10 15:00:14.573606 | orchestrator | 2223484ffc15 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2026-01-10 15:00:14.573612 | orchestrator | 68ec92fdadae registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2026-01-10 15:00:14.573616 | orchestrator | 32e1ef850236 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2026-01-10 15:00:14.573620 | orchestrator | 2c9ad3e714e5 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2026-01-10 15:00:14.573624 | orchestrator | a411d667f17f registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2026-01-10 15:00:14.573628 | orchestrator | 0539bb6f4856 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor
2026-01-10 15:00:14.573644 | orchestrator | b6d74aa99705 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api
2026-01-10 15:00:14.573648 | orchestrator | b3e122be943f registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2026-01-10 15:00:14.573652 | orchestrator | 2b782a39fbc8 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2026-01-10 15:00:14.573656 | orchestrator | bc2cac60c4aa registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2026-01-10 15:00:14.573660 | orchestrator | 642bec711932 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2026-01-10 15:00:14.573664 | orchestrator | 0f3fa3492998 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2026-01-10 15:00:14.573667 | orchestrator | bf4acf940282 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2026-01-10 15:00:14.573671 | orchestrator | a0173bc7bc79 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2026-01-10 15:00:14.573687 | orchestrator | 67ebca91e70e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2026-01-10 15:00:14.573691 | orchestrator | 6988f073178b registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2026-01-10 15:00:14.573695 | orchestrator | b09054981f5e registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api
2026-01-10 15:00:14.573699 | orchestrator | db299c536eef registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2026-01-10 15:00:14.573702 | orchestrator | 450178a4e531 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2026-01-10 15:00:14.573711 | orchestrator | 17a7fb45af9a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2026-01-10 15:00:14.573715 | orchestrator | 199f552f6450 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2026-01-10 15:00:14.573719 | orchestrator | 00ee51fbd109 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2026-01-10 15:00:14.573723 | orchestrator | 4596da099716 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2026-01-10 15:00:14.573729 | orchestrator | 124cde09c49e registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2026-01-10 15:00:14.573733 | orchestrator | 8568c0f5776e registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2026-01-10 15:00:14.573737 | orchestrator | 91b3b3e31534 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2026-01-10 15:00:14.573741 | orchestrator | c93fcfc6be7c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2
2026-01-10 15:00:14.573745 | orchestrator | abde31f67e44 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2026-01-10 15:00:14.573748 | orchestrator | aa08417b0187 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2026-01-10 15:00:14.573755 | orchestrator | 18d40eb58794 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2026-01-10 15:00:14.573760 | orchestrator | b1b4ddb57bc6 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd
2026-01-10 15:00:14.573764 | orchestrator | 36bea1113138 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db
2026-01-10 15:00:14.573768 | orchestrator | 757bdd8c06b3 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db
2026-01-10 15:00:14.573771 | orchestrator | 9d52b736b203 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2
2026-01-10 15:00:14.573775 | orchestrator | 9758d1cde95b registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2026-01-10 15:00:14.573779 | orchestrator | c4ca6eb47ce9 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2026-01-10 15:00:14.573783 | orchestrator | c17dd5094a29 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd
2026-01-10 15:00:14.573787 | orchestrator | 940c3fb80c0e registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2026-01-10 15:00:14.573790 | orchestrator | 167f125299fe registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2026-01-10 15:00:14.573797 | orchestrator | 9a0b77d06570 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2026-01-10 15:00:14.573801 | orchestrator | 02657754e20d registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2026-01-10 15:00:14.573805 | orchestrator | fc3d5265bbf7 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2026-01-10 15:00:14.573809 | orchestrator | 1a44e6dfb3f3 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2026-01-10 15:00:14.573813 | orchestrator | 977f0fc29459 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2026-01-10 15:00:14.973341 | orchestrator |
2026-01-10 15:00:14.973422 | orchestrator | ## Images @ testbed-node-2
2026-01-10 15:00:14.973433 | orchestrator |
2026-01-10 15:00:14.973439 | orchestrator | + echo
2026-01-10 15:00:14.973446 | orchestrator | + echo '## Images @ testbed-node-2'
2026-01-10 15:00:14.973454 | orchestrator | + echo
2026-01-10 15:00:14.973460 | orchestrator | + osism container testbed-node-2 images
2026-01-10 15:00:17.539594 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-01-10 15:00:17.539684 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2ffb60ff6501 12 hours ago 1.27GB
2026-01-10 15:00:17.539692 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c6e6d76ef14c 13 hours ago 272MB
2026-01-10 15:00:17.539697 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 8c44feb2d7a6 13 hours ago 1.02GB
2026-01-10 15:00:17.539701 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 fe54d9472228 13 hours ago 1.56GB
2026-01-10 15:00:17.539705 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 e8119f1424cc 13 hours ago 1.53GB
2026-01-10 15:00:17.539709 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7095452e5332 13 hours ago 271MB
2026-01-10 15:00:17.539713 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d4efb5144edd 13 hours ago 417MB
2026-01-10 15:00:17.539717 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 f69b470de803 13 hours ago 585MB
2026-01-10 15:00:17.539721 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 738a7fd75c3a 13 hours ago 282MB
2026-01-10 15:00:17.539724 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 13ae22f9119b 13 hours ago 279MB
2026-01-10 15:00:17.539728 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 d045f87980bf 13 hours ago 328MB
2026-01-10 15:00:17.539747 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 c4dc895df892 13 hours ago 675MB
2026-01-10 15:00:17.539751 | orchestrator | registry.osism.tech/kolla/redis 2024.2 0656d122f10d 13 hours ago 278MB
2026-01-10 15:00:17.539754 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 b8cddc89bea5 13 hours ago 278MB
2026-01-10 15:00:17.539758 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 6726b3688e46 13 hours ago 458MB
2026-01-10 15:00:17.539762 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5af3a8262f9d 13 hours ago 1.16GB
2026-01-10 15:00:17.539766 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 9940050049be 13 hours ago 284MB
2026-01-10 15:00:17.539769 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 bd7e66d37dcb 13 hours ago 284MB
2026-01-10 15:00:17.539788 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 a92e2c2998a1 13 hours ago 306MB
2026-01-10 15:00:17.539792 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 b2f0f61513e9 13 hours ago 297MB
2026-01-10 15:00:17.539796 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 f10a21348073 13 hours ago 304MB
2026-01-10 15:00:17.539799 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 47802b9af8e3 13 hours ago 311MB
2026-01-10 15:00:17.539803 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 189aa2300c26 13 hours ago 363MB
2026-01-10 15:00:17.539807 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 703f990809de 13 hours ago 1.06GB
2026-01-10 15:00:17.539811 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 7377b333eb41 13 hours ago 1.03GB
2026-01-10 15:00:17.539814 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 40ef6a23eef3 13 hours ago 1.03GB
2026-01-10 15:00:17.539819 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 cd90406dbcbf 13 hours ago 1.06GB
2026-01-10 15:00:17.539825 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 2cedaca7de0f 13 hours ago 1.03GB
2026-01-10 15:00:17.539831 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 7256fa6044bc 13 hours ago 1.22GB
2026-01-10 15:00:17.539837 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 2c05f68e2ac3 13 hours ago 1.22GB
2026-01-10 15:00:17.539845 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 b4e2c918cceb 13 hours ago 1.37GB
2026-01-10 15:00:17.539850 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 77191a405166 13 hours ago 1.22GB
2026-01-10 15:00:17.539856 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 14e50a72c144 13 hours ago 1.1GB
2026-01-10 15:00:17.539863 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 4d29da55c510 13 hours ago 1.72GB
2026-01-10 15:00:17.539869 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 39030b3ffb1e 13 hours ago 1.41GB
2026-01-10 15:00:17.539876 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 3fbbe03b2464 13 hours ago 1.41GB
2026-01-10 15:00:17.539892 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 9cdebff3218e 13 hours ago 1.42GB
2026-01-10 15:00:17.539896 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 ff2a97ab03b1 13 hours ago 981MB
2026-01-10 15:00:17.539900 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 eb9f23d976a8 13 hours ago 1.25GB
2026-01-10 15:00:17.539907 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 c7206841ed53 13 hours ago 1.13GB
2026-01-10 15:00:17.539911 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 d35919d11266 13 hours ago 1.17GB
2026-01-10 15:00:17.539915 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 eb35cf201ba2 13 hours ago 989MB
2026-01-10 15:00:17.539919 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 091aa5c747a0 13 hours ago 990MB
2026-01-10 15:00:17.539922 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d9179f8dd840 13 hours ago 994MB
2026-01-10 15:00:17.539926 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 214353d5e35e 13 hours ago 990MB
2026-01-10 15:00:17.539930 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 036b811dc775 13 hours ago 990MB
2026-01-10 15:00:17.539933 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 eb0c61e2a22c 13 hours ago 994MB
2026-01-10 15:00:17.539937 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 9ef5e978081e 13 hours ago 997MB
2026-01-10 15:00:17.539945 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 966b1cebb92e 13 hours ago 996MB
2026-01-10 15:00:17.539949 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 5f533c242846 13 hours ago 997MB
2026-01-10 15:00:17.539952 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 4cf1f2ebfaf4 13 hours ago 1.04GB
2026-01-10 15:00:17.539956 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a035d12eae22 13 hours ago 1.05GB
2026-01-10 15:00:17.539960 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 adaf0a51859f 13 hours ago 1.09GB
2026-01-10 15:00:17.539963 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a82ec71fa2e5 13 hours ago 846MB
2026-01-10 15:00:17.539967 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 89e445636e48 13 hours ago 846MB
2026-01-10 15:00:17.539971 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 8d9ab516e3e2 13 hours ago 846MB
2026-01-10 15:00:17.539974 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 4ddd4a3a84e6 13 hours ago 846MB
2026-01-10 15:00:17.974149 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-01-10 15:00:17.984461 | orchestrator | + set -e
2026-01-10 15:00:17.984545 | orchestrator | + source /opt/manager-vars.sh
2026-01-10 15:00:17.986277 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-10 15:00:17.986344 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-10 15:00:17.986351 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-10 15:00:17.986356 | orchestrator | ++ CEPH_VERSION=reef
2026-01-10 15:00:17.986361 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-10 15:00:17.986366 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-10 15:00:17.986371 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-10 15:00:17.986376 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-10 15:00:17.986380 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-10 15:00:17.986385 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-10 15:00:17.986389 | orchestrator | ++ export ARA=false
2026-01-10 15:00:17.986394 | orchestrator | ++ ARA=false
2026-01-10 15:00:17.986398 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-10 15:00:17.986403 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-10 15:00:17.986407 | orchestrator | ++ export TEMPEST=false
2026-01-10 15:00:17.986411 | orchestrator | ++ TEMPEST=false
2026-01-10 15:00:17.986415 | orchestrator | ++ export IS_ZUUL=true
2026-01-10 15:00:17.986419 | orchestrator | ++ IS_ZUUL=true
2026-01-10 15:00:17.986424 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62
2026-01-10 15:00:17.986428 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62
2026-01-10 15:00:17.986432 | orchestrator | ++ export EXTERNAL_API=false
2026-01-10 15:00:17.986436 | orchestrator | ++ EXTERNAL_API=false
2026-01-10 15:00:17.986440 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-10 15:00:17.986444 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-10 15:00:17.986448 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-10 15:00:17.986452 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-10 15:00:17.986459 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-10 15:00:17.986465 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-10 15:00:17.986472 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-10 15:00:17.986478 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-01-10 15:00:17.996434 | orchestrator | + set -e
2026-01-10 15:00:17.996500 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-10 15:00:17.996508 | orchestrator | ++ export INTERACTIVE=false
2026-01-10 15:00:17.996514 | orchestrator | ++ INTERACTIVE=false
2026-01-10 15:00:17.996519 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-10 15:00:17.996524 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-10 15:00:17.996529 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-10 15:00:17.998206 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-10 15:00:18.004635 | orchestrator |
2026-01-10 15:00:18.004703 | orchestrator | # Ceph status
2026-01-10 15:00:18.004710 | orchestrator |
2026-01-10 15:00:18.004715 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-10 15:00:18.004721 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-10 15:00:18.004726 | orchestrator | + echo
2026-01-10 15:00:18.004731 | orchestrator | + echo '# Ceph status'
2026-01-10 15:00:18.004736 | orchestrator | + echo
2026-01-10 15:00:18.004761 | orchestrator | + ceph -s
2026-01-10 15:00:18.603123 | orchestrator | cluster:
2026-01-10 15:00:18.603197 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-01-10 15:00:18.603206 | orchestrator | health: HEALTH_OK
2026-01-10 15:00:18.603211 | orchestrator |
2026-01-10 15:00:18.603217 | orchestrator | services:
2026-01-10 15:00:18.603222 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m)
2026-01-10 15:00:18.603229 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0
2026-01-10 15:00:18.603235 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-01-10 15:00:18.603240 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m)
2026-01-10 15:00:18.603246 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-01-10 15:00:18.603251 | orchestrator |
2026-01-10 15:00:18.603256 | orchestrator | data:
2026-01-10 15:00:18.603261 | orchestrator | volumes: 1/1 healthy
2026-01-10 15:00:18.603266 | orchestrator | pools: 14 pools, 401 pgs
2026-01-10 15:00:18.603271 | orchestrator | objects: 524 objects, 2.2 GiB
2026-01-10 15:00:18.603276 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-01-10 15:00:18.603281 | orchestrator | pgs: 401 active+clean
2026-01-10 15:00:18.603285 | orchestrator |
2026-01-10 15:00:18.648995 | orchestrator |
2026-01-10 15:00:18.649132 | orchestrator | # Ceph versions
2026-01-10 15:00:18.649150 | orchestrator |
2026-01-10 15:00:18.649159 | orchestrator | + echo
2026-01-10 15:00:18.649169 | orchestrator | + echo '# Ceph versions'
2026-01-10 15:00:18.649178 | orchestrator | + echo
2026-01-10 15:00:18.649187 | orchestrator | + ceph versions
2026-01-10 15:00:19.285778 | orchestrator | {
2026-01-10 15:00:19.285877 | orchestrator | "mon": {
2026-01-10 15:00:19.285885 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:00:19.285890 | orchestrator | },
2026-01-10 15:00:19.285894 | orchestrator | "mgr": {
2026-01-10 15:00:19.285899 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:00:19.285903 | orchestrator | },
2026-01-10 15:00:19.285907 | orchestrator | "osd": {
2026-01-10 15:00:19.285911 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-01-10 15:00:19.285915 | orchestrator | },
2026-01-10 15:00:19.285918 | orchestrator | "mds": {
2026-01-10 15:00:19.285922 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:00:19.285926 | orchestrator | },
2026-01-10 15:00:19.285930 | orchestrator | "rgw": {
2026-01-10 15:00:19.285950 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:00:19.285954 | orchestrator | },
2026-01-10 15:00:19.285958 | orchestrator | "overall": {
2026-01-10 15:00:19.285962 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-01-10 15:00:19.285966 | orchestrator | }
2026-01-10 15:00:19.285970 | orchestrator | }
2026-01-10 15:00:19.343584 | orchestrator |
2026-01-10 15:00:19.343669 | orchestrator | # Ceph OSD tree
2026-01-10 15:00:19.343678 | orchestrator |
2026-01-10 15:00:19.343686 | orchestrator | + echo
2026-01-10 15:00:19.343694 | orchestrator | + echo '# Ceph OSD tree'
2026-01-10 15:00:19.343701 | orchestrator | + echo
2026-01-10 15:00:19.343708 | orchestrator | + ceph osd df tree
2026-01-10 15:00:19.868214 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-01-10 15:00:19.868304 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2026-01-10 15:00:19.868311 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2026-01-10 15:00:19.868316 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1008 MiB 939 MiB 1 KiB 70 MiB 19 GiB 4.93 0.83 189 up osd.0
2026-01-10 15:00:19.868320 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.90 1.17 201 up osd.3
2026-01-10 15:00:19.868325 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2026-01-10 15:00:19.868329 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.75 1.14 199 up osd.2
2026-01-10 15:00:19.868354 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 971 MiB 1 KiB 70 MiB 19 GiB 5.08 0.86 189 up osd.4
2026-01-10 15:00:19.868358 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2026-01-10 15:00:19.868362 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.71 0.97 193 up osd.1
2026-01-10 15:00:19.868366 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.12 1.03 199 up osd.5
2026-01-10 15:00:19.868370 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2026-01-10 15:00:19.868374 | orchestrator | MIN/MAX VAR: 0.83/1.17 STDDEV: 0.75
2026-01-10 15:00:19.918799 | orchestrator |
2026-01-10 15:00:19.918888 | orchestrator | # Ceph monitor status
2026-01-10 15:00:19.918899 | orchestrator |
2026-01-10 15:00:19.918906 | orchestrator | + echo
2026-01-10 15:00:19.918913 | orchestrator | + echo '# Ceph monitor status'
2026-01-10 15:00:19.918919 | orchestrator | + echo
2026-01-10 15:00:19.918925 | orchestrator | + ceph mon stat
2026-01-10 15:00:20.514267 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-01-10 15:00:20.572410 | orchestrator |
2026-01-10 15:00:20.572546 | orchestrator | # Ceph quorum status
2026-01-10 15:00:20.572558 | orchestrator |
2026-01-10 15:00:20.572566 | orchestrator | + echo
2026-01-10 15:00:20.572573 | orchestrator | + echo '# Ceph quorum status'
2026-01-10 15:00:20.572579 | orchestrator | + echo
2026-01-10 15:00:20.572638 | orchestrator | + ceph quorum_status
2026-01-10 15:00:20.573769 | orchestrator | + jq
2026-01-10 15:00:21.280144 | orchestrator | {
2026-01-10 15:00:21.280240 | orchestrator | "election_epoch": 6,
2026-01-10 15:00:21.280250 | orchestrator | "quorum": [
2026-01-10 15:00:21.280256 | orchestrator | 0,
2026-01-10 15:00:21.280262 | orchestrator | 1,
2026-01-10 15:00:21.280269 | orchestrator | 2
2026-01-10 15:00:21.280275 | orchestrator | ],
2026-01-10 15:00:21.280282 | orchestrator | "quorum_names": [
2026-01-10 15:00:21.280288 | orchestrator | "testbed-node-0",
2026-01-10 15:00:21.280294 | orchestrator | "testbed-node-1",
2026-01-10 15:00:21.280301 | orchestrator | "testbed-node-2"
2026-01-10 15:00:21.280308 | orchestrator | ],
2026-01-10 15:00:21.280315 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-01-10 15:00:21.280323 | orchestrator | "quorum_age": 1710,
2026-01-10 15:00:21.280330 | orchestrator | "features": {
2026-01-10 15:00:21.280334 | orchestrator | "quorum_con": "4540138322906710015",
2026-01-10 15:00:21.280338 | orchestrator | "quorum_mon": [
2026-01-10 15:00:21.280342 | orchestrator | "kraken",
2026-01-10 15:00:21.280346 | orchestrator | "luminous",
2026-01-10 15:00:21.280350 | orchestrator | "mimic",
2026-01-10 15:00:21.280354 | orchestrator | "osdmap-prune",
2026-01-10 15:00:21.280358 | orchestrator | "nautilus",
2026-01-10 15:00:21.280362 | orchestrator | "octopus",
2026-01-10 15:00:21.280366 | orchestrator | "pacific",
2026-01-10 15:00:21.280370 | orchestrator | "elector-pinging",
2026-01-10 15:00:21.280374 | orchestrator | "quincy",
2026-01-10 15:00:21.280378 | orchestrator | "reef"
2026-01-10 15:00:21.280382 | orchestrator | ]
2026-01-10 15:00:21.280385 | orchestrator | },
2026-01-10 15:00:21.280389 | orchestrator | "monmap": {
2026-01-10 15:00:21.280393 | orchestrator | "epoch": 1,
2026-01-10 15:00:21.280397 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-01-10 15:00:21.280402 | orchestrator | "modified": "2026-01-10T14:31:28.985956Z",
2026-01-10 15:00:21.280406 | orchestrator | "created": "2026-01-10T14:31:28.985956Z",
2026-01-10 15:00:21.280410 | orchestrator | "min_mon_release": 18,
2026-01-10 15:00:21.280414 | orchestrator | "min_mon_release_name": "reef",
2026-01-10 15:00:21.280418 | orchestrator | "election_strategy": 1,
2026-01-10 15:00:21.280422 | orchestrator | "disallowed_leaders: ": "",
2026-01-10 15:00:21.280426 | orchestrator | "stretch_mode": false,
2026-01-10 15:00:21.280429 | orchestrator | "tiebreaker_mon": "",
2026-01-10 15:00:21.280433 | orchestrator | "removed_ranks: ": "",
2026-01-10 15:00:21.280437 | orchestrator | "features": {
2026-01-10 15:00:21.280441 | orchestrator | "persistent": [
2026-01-10 15:00:21.280445 | orchestrator | "kraken",
2026-01-10 15:00:21.280448 | orchestrator | "luminous",
2026-01-10 15:00:21.280475 | orchestrator | "mimic",
2026-01-10 15:00:21.280479 | orchestrator | "osdmap-prune",
2026-01-10 15:00:21.280482 | orchestrator | "nautilus",
2026-01-10 15:00:21.280486 | orchestrator | "octopus",
2026-01-10 15:00:21.280490 | orchestrator | "pacific",
2026-01-10 15:00:21.280493 | orchestrator | "elector-pinging",
2026-01-10 15:00:21.280497 | orchestrator | "quincy",
2026-01-10 15:00:21.280501 | orchestrator | "reef"
2026-01-10 15:00:21.280505 | orchestrator | ],
2026-01-10 15:00:21.280508 | orchestrator | "optional": []
2026-01-10 15:00:21.280512 | orchestrator | },
2026-01-10 15:00:21.280516 | orchestrator | "mons": [
2026-01-10 15:00:21.280519 | orchestrator | {
2026-01-10 15:00:21.280523 | orchestrator | "rank": 0,
2026-01-10 15:00:21.280527 | orchestrator | "name": "testbed-node-0",
2026-01-10 15:00:21.280531 | orchestrator | "public_addrs": {
2026-01-10 15:00:21.280535 | orchestrator | "addrvec": [
2026-01-10 15:00:21.280538 | orchestrator | {
2026-01-10 15:00:21.280552 | orchestrator | "type": "v2",
2026-01-10 15:00:21.280556 | orchestrator | "addr": "192.168.16.10:3300",
2026-01-10 15:00:21.280560 | orchestrator | "nonce": 0
2026-01-10 15:00:21.280564 | orchestrator | },
2026-01-10 15:00:21.280568 | orchestrator | {
2026-01-10 15:00:21.280571 | orchestrator | "type": "v1",
2026-01-10 15:00:21.280575 | orchestrator | "addr": "192.168.16.10:6789",
2026-01-10 15:00:21.280579 | orchestrator | "nonce": 0
2026-01-10 15:00:21.280583 | orchestrator | }
2026-01-10 15:00:21.280586 | orchestrator | ]
2026-01-10 15:00:21.280590 | orchestrator | },
2026-01-10 15:00:21.280594 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-01-10 15:00:21.280598 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-01-10 15:00:21.280602 | orchestrator | "priority": 0,
2026-01-10 15:00:21.280605 | orchestrator | "weight": 0,
2026-01-10 15:00:21.280609 | orchestrator | "crush_location": "{}"
2026-01-10 15:00:21.280613 | orchestrator | },
2026-01-10 15:00:21.280616 | orchestrator | {
2026-01-10 15:00:21.280620 | orchestrator | "rank": 1,
2026-01-10 15:00:21.280624 | orchestrator | "name": "testbed-node-1",
2026-01-10 15:00:21.280628 | orchestrator | "public_addrs": {
2026-01-10 15:00:21.280631 | orchestrator | "addrvec": [
2026-01-10 15:00:21.280635 | orchestrator | {
2026-01-10 15:00:21.280639 | orchestrator | "type": "v2",
2026-01-10 15:00:21.280643 | orchestrator | "addr": "192.168.16.11:3300",
2026-01-10 15:00:21.280646 | orchestrator | "nonce": 0
2026-01-10 15:00:21.280650 | orchestrator | },
2026-01-10 15:00:21.280654 | orchestrator | {
2026-01-10 15:00:21.280657 | orchestrator | "type": "v1",
2026-01-10 15:00:21.280661 | orchestrator | "addr": "192.168.16.11:6789",
2026-01-10 15:00:21.280665 | orchestrator | "nonce": 0
2026-01-10 15:00:21.280669 | orchestrator | }
2026-01-10 15:00:21.280672 | orchestrator | ]
2026-01-10 15:00:21.280676 | orchestrator | },
2026-01-10 15:00:21.280680 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-01-10 15:00:21.280684 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-01-10 15:00:21.280687 | orchestrator | "priority": 0,
2026-01-10 15:00:21.280691 | orchestrator | "weight": 0,
2026-01-10 15:00:21.280695 | orchestrator | "crush_location": "{}"
2026-01-10 15:00:21.280698 | orchestrator | },
2026-01-10 15:00:21.280702 | orchestrator | {
2026-01-10 15:00:21.280706 | orchestrator | "rank": 2,
2026-01-10 15:00:21.280709 | orchestrator | "name": "testbed-node-2",
2026-01-10 15:00:21.280713 | orchestrator | "public_addrs": {
2026-01-10 15:00:21.280717 | orchestrator | "addrvec": [
2026-01-10 15:00:21.280720 | orchestrator | {
2026-01-10 15:00:21.280724 | orchestrator | "type": "v2",
2026-01-10 15:00:21.280728 | orchestrator | "addr": "192.168.16.12:3300",
2026-01-10 15:00:21.280731 |
orchestrator | "nonce": 0 2026-01-10 15:00:21.280735 | orchestrator | }, 2026-01-10 15:00:21.280739 | orchestrator | { 2026-01-10 15:00:21.280743 | orchestrator | "type": "v1", 2026-01-10 15:00:21.280746 | orchestrator | "addr": "192.168.16.12:6789", 2026-01-10 15:00:21.280750 | orchestrator | "nonce": 0 2026-01-10 15:00:21.280754 | orchestrator | } 2026-01-10 15:00:21.280757 | orchestrator | ] 2026-01-10 15:00:21.280761 | orchestrator | }, 2026-01-10 15:00:21.280765 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-01-10 15:00:21.280769 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-01-10 15:00:21.280772 | orchestrator | "priority": 0, 2026-01-10 15:00:21.280788 | orchestrator | "weight": 0, 2026-01-10 15:00:21.280792 | orchestrator | "crush_location": "{}" 2026-01-10 15:00:21.280796 | orchestrator | } 2026-01-10 15:00:21.280800 | orchestrator | ] 2026-01-10 15:00:21.280804 | orchestrator | } 2026-01-10 15:00:21.280807 | orchestrator | } 2026-01-10 15:00:21.280901 | orchestrator | 2026-01-10 15:00:21.280907 | orchestrator | # Ceph free space status 2026-01-10 15:00:21.280911 | orchestrator | 2026-01-10 15:00:21.280915 | orchestrator | + echo 2026-01-10 15:00:21.280918 | orchestrator | + echo '# Ceph free space status' 2026-01-10 15:00:21.280922 | orchestrator | + echo 2026-01-10 15:00:21.280926 | orchestrator | + ceph df 2026-01-10 15:00:21.903962 | orchestrator | --- RAW STORAGE --- 2026-01-10 15:00:21.904037 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-01-10 15:00:21.904053 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-01-10 15:00:21.904057 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-01-10 15:00:21.904062 | orchestrator | 2026-01-10 15:00:21.904067 | orchestrator | --- POOLS --- 2026-01-10 15:00:21.904071 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-01-10 15:00:21.904125 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-01-10 15:00:21.904130 | orchestrator 
| cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-01-10 15:00:21.904135 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-01-10 15:00:21.904139 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-01-10 15:00:21.904143 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-01-10 15:00:21.904147 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-01-10 15:00:21.904151 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2026-01-10 15:00:21.904155 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-01-10 15:00:21.904159 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-01-10 15:00:21.904162 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-01-10 15:00:21.904166 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-01-10 15:00:21.904170 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB 2026-01-10 15:00:21.904173 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-01-10 15:00:21.904177 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-01-10 15:00:21.953300 | orchestrator | ++ semver latest 5.0.0 2026-01-10 15:00:22.026461 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-10 15:00:22.026547 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 15:00:22.026557 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-01-10 15:00:22.026564 | orchestrator | + osism apply facts 2026-01-10 15:00:24.215291 | orchestrator | 2026-01-10 15:00:24 | INFO  | Task 24379e66-1e6d-4e73-b377-6a05cc6f4d33 (facts) was prepared for execution. 2026-01-10 15:00:24.215380 | orchestrator | 2026-01-10 15:00:24 | INFO  | It takes a moment until task 24379e66-1e6d-4e73-b377-6a05cc6f4d33 (facts) has been started and output is visible here. 
2026-01-10 15:00:39.122321 | orchestrator | 2026-01-10 15:00:39.122422 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-10 15:00:39.122430 | orchestrator | 2026-01-10 15:00:39.122435 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-10 15:00:39.122439 | orchestrator | Saturday 10 January 2026 15:00:29 +0000 (0:00:00.297) 0:00:00.297 ****** 2026-01-10 15:00:39.122444 | orchestrator | ok: [testbed-manager] 2026-01-10 15:00:39.122448 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:00:39.122453 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:00:39.122457 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:00:39.122461 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:00:39.122465 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:00:39.122469 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:00:39.122473 | orchestrator | 2026-01-10 15:00:39.122477 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-10 15:00:39.122502 | orchestrator | Saturday 10 January 2026 15:00:30 +0000 (0:00:01.648) 0:00:01.946 ****** 2026-01-10 15:00:39.122510 | orchestrator | skipping: [testbed-manager] 2026-01-10 15:00:39.122520 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:00:39.122526 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:00:39.122532 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:00:39.122537 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:00:39.122543 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:00:39.122549 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:00:39.122555 | orchestrator | 2026-01-10 15:00:39.122561 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-10 15:00:39.122567 | orchestrator | 2026-01-10 15:00:39.122573 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-10 15:00:39.122578 | orchestrator | Saturday 10 January 2026 15:00:32 +0000 (0:00:01.446) 0:00:03.392 ****** 2026-01-10 15:00:39.122584 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:00:39.122590 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:00:39.122596 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:00:39.122635 | orchestrator | ok: [testbed-manager] 2026-01-10 15:00:39.122642 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:00:39.122649 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:00:39.122653 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:00:39.122657 | orchestrator | 2026-01-10 15:00:39.122661 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-10 15:00:39.122664 | orchestrator | 2026-01-10 15:00:39.122669 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-10 15:00:39.122673 | orchestrator | Saturday 10 January 2026 15:00:37 +0000 (0:00:05.611) 0:00:09.004 ****** 2026-01-10 15:00:39.122676 | orchestrator | skipping: [testbed-manager] 2026-01-10 15:00:39.122680 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:00:39.122684 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:00:39.122688 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:00:39.122692 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:00:39.122696 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:00:39.122699 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:00:39.122703 | orchestrator | 2026-01-10 15:00:39.122707 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 15:00:39.122712 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:00:39.122717 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-10 15:00:39.122721 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:00:39.122724 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:00:39.122728 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:00:39.122732 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:00:39.122736 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:00:39.122739 | orchestrator | 2026-01-10 15:00:39.122743 | orchestrator | 2026-01-10 15:00:39.122747 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 15:00:39.122751 | orchestrator | Saturday 10 January 2026 15:00:38 +0000 (0:00:00.639) 0:00:09.644 ****** 2026-01-10 15:00:39.122755 | orchestrator | =============================================================================== 2026-01-10 15:00:39.122764 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.61s 2026-01-10 15:00:39.122768 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.65s 2026-01-10 15:00:39.122772 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.45s 2026-01-10 15:00:39.122776 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s 2026-01-10 15:00:39.554801 | orchestrator | + osism validate ceph-mons 2026-01-10 15:01:14.458354 | orchestrator | 2026-01-10 15:01:14.458565 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-01-10 15:01:14.458584 | orchestrator | 2026-01-10 15:01:14.458591 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-01-10 15:01:14.458596 | orchestrator | Saturday 10 January 2026 15:00:56 +0000 (0:00:00.455) 0:00:00.455 ****** 2026-01-10 15:01:14.458610 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:01:14.458614 | orchestrator | 2026-01-10 15:01:14.458618 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-10 15:01:14.458623 | orchestrator | Saturday 10 January 2026 15:00:57 +0000 (0:00:00.941) 0:00:01.397 ****** 2026-01-10 15:01:14.458627 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:01:14.458631 | orchestrator | 2026-01-10 15:01:14.458635 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-10 15:01:14.458638 | orchestrator | Saturday 10 January 2026 15:00:58 +0000 (0:00:01.222) 0:00:02.619 ****** 2026-01-10 15:01:14.458643 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.458647 | orchestrator | 2026-01-10 15:01:14.458651 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-01-10 15:01:14.458655 | orchestrator | Saturday 10 January 2026 15:00:58 +0000 (0:00:00.139) 0:00:02.759 ****** 2026-01-10 15:01:14.458659 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.458663 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:01:14.458668 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:01:14.458674 | orchestrator | 2026-01-10 15:01:14.458680 | orchestrator | TASK [Get container info] ****************************************************** 2026-01-10 15:01:14.458703 | orchestrator | Saturday 10 January 2026 15:00:59 +0000 (0:00:00.319) 0:00:03.078 ****** 2026-01-10 15:01:14.458714 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:01:14.458721 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:01:14.458728 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.458734 | 
orchestrator | 2026-01-10 15:01:14.458740 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-01-10 15:01:14.458746 | orchestrator | Saturday 10 January 2026 15:01:00 +0000 (0:00:01.297) 0:00:04.376 ****** 2026-01-10 15:01:14.458753 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.458760 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:01:14.458766 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:01:14.458773 | orchestrator | 2026-01-10 15:01:14.458780 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-01-10 15:01:14.458786 | orchestrator | Saturday 10 January 2026 15:01:00 +0000 (0:00:00.341) 0:00:04.717 ****** 2026-01-10 15:01:14.458792 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.458799 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:01:14.458805 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:01:14.458812 | orchestrator | 2026-01-10 15:01:14.458819 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-10 15:01:14.458826 | orchestrator | Saturday 10 January 2026 15:01:01 +0000 (0:00:00.524) 0:00:05.242 ****** 2026-01-10 15:01:14.458832 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.458838 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:01:14.458845 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:01:14.458852 | orchestrator | 2026-01-10 15:01:14.458859 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-01-10 15:01:14.458867 | orchestrator | Saturday 10 January 2026 15:01:01 +0000 (0:00:00.324) 0:00:05.566 ****** 2026-01-10 15:01:14.458874 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.458901 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:01:14.458909 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:01:14.458913 | orchestrator | 2026-01-10 
15:01:14.458917 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-01-10 15:01:14.458922 | orchestrator | Saturday 10 January 2026 15:01:01 +0000 (0:00:00.335) 0:00:05.901 ****** 2026-01-10 15:01:14.458927 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.458931 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:01:14.458935 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:01:14.458940 | orchestrator | 2026-01-10 15:01:14.458944 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-10 15:01:14.458948 | orchestrator | Saturday 10 January 2026 15:01:02 +0000 (0:00:00.699) 0:00:06.601 ****** 2026-01-10 15:01:14.458952 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.458956 | orchestrator | 2026-01-10 15:01:14.458961 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-10 15:01:14.458965 | orchestrator | Saturday 10 January 2026 15:01:02 +0000 (0:00:00.269) 0:00:06.871 ****** 2026-01-10 15:01:14.458969 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.458974 | orchestrator | 2026-01-10 15:01:14.458979 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-10 15:01:14.458984 | orchestrator | Saturday 10 January 2026 15:01:03 +0000 (0:00:00.280) 0:00:07.151 ****** 2026-01-10 15:01:14.458988 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.458992 | orchestrator | 2026-01-10 15:01:14.458996 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:01:14.459001 | orchestrator | Saturday 10 January 2026 15:01:03 +0000 (0:00:00.300) 0:00:07.452 ****** 2026-01-10 15:01:14.459005 | orchestrator | 2026-01-10 15:01:14.459009 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:01:14.459014 | orchestrator | 
Saturday 10 January 2026 15:01:03 +0000 (0:00:00.116) 0:00:07.568 ****** 2026-01-10 15:01:14.459018 | orchestrator | 2026-01-10 15:01:14.459022 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:01:14.459026 | orchestrator | Saturday 10 January 2026 15:01:03 +0000 (0:00:00.090) 0:00:07.659 ****** 2026-01-10 15:01:14.459030 | orchestrator | 2026-01-10 15:01:14.459035 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-10 15:01:14.459039 | orchestrator | Saturday 10 January 2026 15:01:03 +0000 (0:00:00.107) 0:00:07.766 ****** 2026-01-10 15:01:14.459043 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.459048 | orchestrator | 2026-01-10 15:01:14.459052 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-10 15:01:14.459057 | orchestrator | Saturday 10 January 2026 15:01:04 +0000 (0:00:00.275) 0:00:08.042 ****** 2026-01-10 15:01:14.459061 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.459065 | orchestrator | 2026-01-10 15:01:14.459085 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-01-10 15:01:14.459090 | orchestrator | Saturday 10 January 2026 15:01:04 +0000 (0:00:00.263) 0:00:08.306 ****** 2026-01-10 15:01:14.459094 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.459099 | orchestrator | 2026-01-10 15:01:14.459103 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-01-10 15:01:14.459107 | orchestrator | Saturday 10 January 2026 15:01:04 +0000 (0:00:00.130) 0:00:08.436 ****** 2026-01-10 15:01:14.459112 | orchestrator | changed: [testbed-node-0] 2026-01-10 15:01:14.459116 | orchestrator | 2026-01-10 15:01:14.459120 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-01-10 15:01:14.459125 | orchestrator | 
Saturday 10 January 2026 15:01:06 +0000 (0:00:01.869) 0:00:10.306 ****** 2026-01-10 15:01:14.459129 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.459133 | orchestrator | 2026-01-10 15:01:14.459138 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-01-10 15:01:14.459142 | orchestrator | Saturday 10 January 2026 15:01:07 +0000 (0:00:00.716) 0:00:11.022 ****** 2026-01-10 15:01:14.459151 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.459156 | orchestrator | 2026-01-10 15:01:14.459160 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-01-10 15:01:14.459165 | orchestrator | Saturday 10 January 2026 15:01:07 +0000 (0:00:00.138) 0:00:11.160 ****** 2026-01-10 15:01:14.459169 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.459173 | orchestrator | 2026-01-10 15:01:14.459178 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-01-10 15:01:14.459183 | orchestrator | Saturday 10 January 2026 15:01:07 +0000 (0:00:00.443) 0:00:11.604 ****** 2026-01-10 15:01:14.459187 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.459192 | orchestrator | 2026-01-10 15:01:14.459196 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-01-10 15:01:14.459200 | orchestrator | Saturday 10 January 2026 15:01:08 +0000 (0:00:00.368) 0:00:11.972 ****** 2026-01-10 15:01:14.459205 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.459209 | orchestrator | 2026-01-10 15:01:14.459213 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-01-10 15:01:14.459218 | orchestrator | Saturday 10 January 2026 15:01:08 +0000 (0:00:00.131) 0:00:12.104 ****** 2026-01-10 15:01:14.459222 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.459226 | orchestrator | 2026-01-10 15:01:14.459249 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-01-10 15:01:14.459255 | orchestrator | Saturday 10 January 2026 15:01:08 +0000 (0:00:00.142) 0:00:12.246 ****** 2026-01-10 15:01:14.459261 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.459267 | orchestrator | 2026-01-10 15:01:14.459273 | orchestrator | TASK [Gather status data] ****************************************************** 2026-01-10 15:01:14.459279 | orchestrator | Saturday 10 January 2026 15:01:08 +0000 (0:00:00.139) 0:00:12.386 ****** 2026-01-10 15:01:14.459284 | orchestrator | changed: [testbed-node-0] 2026-01-10 15:01:14.459288 | orchestrator | 2026-01-10 15:01:14.459292 | orchestrator | TASK [Set health test data] **************************************************** 2026-01-10 15:01:14.459296 | orchestrator | Saturday 10 January 2026 15:01:09 +0000 (0:00:01.521) 0:00:13.907 ****** 2026-01-10 15:01:14.459300 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.459303 | orchestrator | 2026-01-10 15:01:14.459307 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-01-10 15:01:14.459311 | orchestrator | Saturday 10 January 2026 15:01:10 +0000 (0:00:00.364) 0:00:14.272 ****** 2026-01-10 15:01:14.459315 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.459318 | orchestrator | 2026-01-10 15:01:14.459322 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-01-10 15:01:14.459326 | orchestrator | Saturday 10 January 2026 15:01:10 +0000 (0:00:00.176) 0:00:14.449 ****** 2026-01-10 15:01:14.459330 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:01:14.459333 | orchestrator | 2026-01-10 15:01:14.459337 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-01-10 15:01:14.459341 | orchestrator | Saturday 10 January 2026 15:01:10 +0000 (0:00:00.151) 0:00:14.601 ****** 2026-01-10 15:01:14.459344 | 
orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.459348 | orchestrator | 2026-01-10 15:01:14.459352 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-01-10 15:01:14.459355 | orchestrator | Saturday 10 January 2026 15:01:11 +0000 (0:00:00.413) 0:00:15.014 ****** 2026-01-10 15:01:14.459359 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.459363 | orchestrator | 2026-01-10 15:01:14.459373 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-10 15:01:14.459377 | orchestrator | Saturday 10 January 2026 15:01:11 +0000 (0:00:00.153) 0:00:15.168 ****** 2026-01-10 15:01:14.459381 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:01:14.459385 | orchestrator | 2026-01-10 15:01:14.459389 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-10 15:01:14.459393 | orchestrator | Saturday 10 January 2026 15:01:11 +0000 (0:00:00.343) 0:00:15.511 ****** 2026-01-10 15:01:14.459400 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:01:14.459404 | orchestrator | 2026-01-10 15:01:14.459411 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-10 15:01:14.459415 | orchestrator | Saturday 10 January 2026 15:01:11 +0000 (0:00:00.263) 0:00:15.775 ****** 2026-01-10 15:01:14.459418 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:01:14.459422 | orchestrator | 2026-01-10 15:01:14.459426 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-10 15:01:14.459430 | orchestrator | Saturday 10 January 2026 15:01:13 +0000 (0:00:01.855) 0:00:17.630 ****** 2026-01-10 15:01:14.459433 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:01:14.459437 | orchestrator | 2026-01-10 15:01:14.459441 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-01-10 15:01:14.459445 | orchestrator | Saturday 10 January 2026 15:01:13 +0000 (0:00:00.269) 0:00:17.900 ****** 2026-01-10 15:01:14.459448 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:01:14.459452 | orchestrator | 2026-01-10 15:01:14.459460 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:01:17.350315 | orchestrator | Saturday 10 January 2026 15:01:14 +0000 (0:00:00.263) 0:00:18.163 ****** 2026-01-10 15:01:17.350426 | orchestrator | 2026-01-10 15:01:17.350441 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:01:17.350450 | orchestrator | Saturday 10 January 2026 15:01:14 +0000 (0:00:00.071) 0:00:18.235 ****** 2026-01-10 15:01:17.350459 | orchestrator | 2026-01-10 15:01:17.350467 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:01:17.350475 | orchestrator | Saturday 10 January 2026 15:01:14 +0000 (0:00:00.069) 0:00:18.305 ****** 2026-01-10 15:01:17.350484 | orchestrator | 2026-01-10 15:01:17.350492 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-10 15:01:17.350499 | orchestrator | Saturday 10 January 2026 15:01:14 +0000 (0:00:00.073) 0:00:18.379 ****** 2026-01-10 15:01:17.350509 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:01:17.350516 | orchestrator | 2026-01-10 15:01:17.350521 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-10 15:01:17.350526 | orchestrator | Saturday 10 January 2026 15:01:16 +0000 (0:00:01.607) 0:00:19.987 ****** 2026-01-10 15:01:17.350530 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-01-10 15:01:17.350536 | orchestrator |  "msg": [ 
2026-01-10 15:01:17.350542 | orchestrator |  "Validator run completed.",
2026-01-10 15:01:17.350547 | orchestrator |  "You can find the report file here:",
2026-01-10 15:01:17.350568 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-01-10T15:00:57+00:00-report.json",
2026-01-10 15:01:17.350574 | orchestrator |  "on the following host:",
2026-01-10 15:01:17.350579 | orchestrator |  "testbed-manager"
2026-01-10 15:01:17.350584 | orchestrator |  ]
2026-01-10 15:01:17.350589 | orchestrator | }
2026-01-10 15:01:17.350594 | orchestrator |
2026-01-10 15:01:17.350599 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:01:17.350605 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-10 15:01:17.350611 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:01:17.350616 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:01:17.350621 | orchestrator |
2026-01-10 15:01:17.350625 | orchestrator |
2026-01-10 15:01:17.350630 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:01:17.350635 | orchestrator | Saturday 10 January 2026 15:01:16 +0000 (0:00:00.875) 0:00:20.863 ******
2026-01-10 15:01:17.350667 | orchestrator | ===============================================================================
2026-01-10 15:01:17.350672 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.87s
2026-01-10 15:01:17.350677 | orchestrator | Aggregate test results step one ----------------------------------------- 1.86s
2026-01-10 15:01:17.350682 | orchestrator | Write report file ------------------------------------------------------- 1.61s
2026-01-10 15:01:17.350686 | orchestrator | Gather status data ------------------------------------------------------ 1.52s
2026-01-10 15:01:17.350691 | orchestrator | Get container info ------------------------------------------------------ 1.30s
2026-01-10 15:01:17.350695 | orchestrator | Create report output directory ------------------------------------------ 1.22s
2026-01-10 15:01:17.350700 | orchestrator | Get timestamp for report file ------------------------------------------- 0.94s
2026-01-10 15:01:17.350704 | orchestrator | Print report file information ------------------------------------------- 0.88s
2026-01-10 15:01:17.350709 | orchestrator | Set quorum test data ---------------------------------------------------- 0.72s
2026-01-10 15:01:17.350713 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.70s
2026-01-10 15:01:17.350718 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s
2026-01-10 15:01:17.350722 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.44s
2026-01-10 15:01:17.350727 | orchestrator | Fail cluster-health if health is not acceptable (strict) ---------------- 0.41s
2026-01-10 15:01:17.350731 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.37s
2026-01-10 15:01:17.350736 | orchestrator | Set health test data ---------------------------------------------------- 0.36s
2026-01-10 15:01:17.350741 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.34s
2026-01-10 15:01:17.350749 | orchestrator | Set test result to failed if container is missing ----------------------- 0.34s
2026-01-10 15:01:17.350757 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.34s
2026-01-10 15:01:17.350763 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2026-01-10 15:01:17.350770 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s
2026-01-10 15:01:17.773039 | orchestrator | + osism validate ceph-mgrs
2026-01-10 15:01:50.856543 | orchestrator |
2026-01-10 15:01:50.856644 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-01-10 15:01:50.856656 | orchestrator |
2026-01-10 15:01:50.856663 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-10 15:01:50.856670 | orchestrator | Saturday 10 January 2026 15:01:34 +0000 (0:00:00.488) 0:00:00.488 ******
2026-01-10 15:01:50.856678 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:01:50.856685 | orchestrator |
2026-01-10 15:01:50.856692 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-10 15:01:50.856698 | orchestrator | Saturday 10 January 2026 15:01:35 +0000 (0:00:00.897) 0:00:01.385 ******
2026-01-10 15:01:50.856705 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:01:50.856712 | orchestrator |
2026-01-10 15:01:50.856719 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-10 15:01:50.856727 | orchestrator | Saturday 10 January 2026 15:01:36 +0000 (0:00:01.105) 0:00:02.491 ******
2026-01-10 15:01:50.856733 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:01:50.856741 | orchestrator |
2026-01-10 15:01:50.856748 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-01-10 15:01:50.856755 | orchestrator | Saturday 10 January 2026 15:01:37 +0000 (0:00:00.150) 0:00:02.641 ******
2026-01-10 15:01:50.856762 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:01:50.856769 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:01:50.856775 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:01:50.856781 | orchestrator |
2026-01-10 15:01:50.856788 | orchestrator | TASK [Get container info] ******************************************************
2026-01-10 15:01:50.856819 | orchestrator | Saturday 10 January 2026 15:01:37 +0000 (0:00:00.308) 0:00:02.950 ******
2026-01-10 15:01:50.856826 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:01:50.856833 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:01:50.856839 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:01:50.856845 | orchestrator |
2026-01-10 15:01:50.856851 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-01-10 15:01:50.856858 | orchestrator | Saturday 10 January 2026 15:01:38 +0000 (0:00:00.326) 0:00:04.130 ******
2026-01-10 15:01:50.856864 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:01:50.856871 | orchestrator | skipping: [testbed-node-1]
2026-01-10 15:01:50.856878 | orchestrator | skipping: [testbed-node-2]
2026-01-10 15:01:50.856885 | orchestrator |
2026-01-10 15:01:50.856906 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-01-10 15:01:50.856913 | orchestrator | Saturday 10 January 2026 15:01:38 +0000 (0:00:00.326) 0:00:04.456 ******
2026-01-10 15:01:50.856920 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:01:50.856927 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:01:50.856934 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:01:50.856940 | orchestrator |
2026-01-10 15:01:50.856947 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:01:50.856954 | orchestrator | Saturday 10 January 2026 15:01:39 +0000 (0:00:00.332) 0:00:05.056 ******
2026-01-10 15:01:50.856961 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:01:50.856967 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:01:50.856974 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:01:50.856981 | orchestrator |
2026-01-10 15:01:50.856988 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-01-10 15:01:50.856994 | orchestrator | Saturday 10 January 2026 15:01:39 +0000 (0:00:00.332) 0:00:05.389 ******
2026-01-10 15:01:50.857001 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:01:50.857008 | orchestrator | skipping: [testbed-node-1]
2026-01-10 15:01:50.857015 | orchestrator | skipping: [testbed-node-2]
2026-01-10 15:01:50.857022 | orchestrator |
2026-01-10 15:01:50.857028 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-01-10 15:01:50.857035 | orchestrator | Saturday 10 January 2026 15:01:40 +0000 (0:00:00.302) 0:00:05.691 ******
2026-01-10 15:01:50.857042 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:01:50.857048 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:01:50.857055 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:01:50.857062 | orchestrator |
2026-01-10 15:01:50.857068 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:01:50.857075 | orchestrator | Saturday 10 January 2026 15:01:40 +0000 (0:00:00.606) 0:00:06.298 ******
2026-01-10 15:01:50.857082 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:01:50.857089 | orchestrator |
2026-01-10 15:01:50.857095 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:01:50.857102 | orchestrator | Saturday 10 January 2026 15:01:41 +0000 (0:00:00.268) 0:00:06.566 ******
2026-01-10 15:01:50.857109 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:01:50.857115 | orchestrator |
2026-01-10 15:01:50.857122 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:01:50.857129 | orchestrator | Saturday 10 January 2026 15:01:41 +0000 (0:00:00.286) 0:00:06.853 ******
2026-01-10 15:01:50.857135 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:01:50.857142 | orchestrator |
2026-01-10 15:01:50.857148 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:01:50.857155 | orchestrator | Saturday 10 January 2026 15:01:41 +0000 (0:00:00.081) 0:00:07.110 ******
2026-01-10 15:01:50.857161 | orchestrator |
2026-01-10 15:01:50.857168 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:01:50.857174 | orchestrator | Saturday 10 January 2026 15:01:41 +0000 (0:00:00.074) 0:00:07.192 ******
2026-01-10 15:01:50.857181 | orchestrator |
2026-01-10 15:01:50.857187 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:01:50.857278 | orchestrator | Saturday 10 January 2026 15:01:41 +0000 (0:00:00.075) 0:00:07.266 ******
2026-01-10 15:01:50.857288 | orchestrator |
2026-01-10 15:01:50.857294 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:01:50.857301 | orchestrator | Saturday 10 January 2026 15:01:41 +0000 (0:00:00.272) 0:00:07.342 ******
2026-01-10 15:01:50.857307 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:01:50.857314 | orchestrator |
2026-01-10 15:01:50.857321 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-01-10 15:01:50.857348 | orchestrator | Saturday 10 January 2026 15:01:42 +0000 (0:00:00.254) 0:00:07.614 ******
2026-01-10 15:01:50.857354 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:01:50.857360 | orchestrator |
2026-01-10 15:01:50.857385 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-01-10 15:01:50.857393 | orchestrator | Saturday 10 January 2026 15:01:42 +0000 (0:00:00.118) 0:00:07.869 ******
2026-01-10 15:01:50.857400 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:01:50.857407 | orchestrator |
2026-01-10 15:01:50.857414 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-01-10 15:01:50.857421 | orchestrator | Saturday 10 January 2026 15:01:42 +0000 (0:00:02.265) 0:00:07.987 ******
2026-01-10 15:01:50.857427 | orchestrator | changed: [testbed-node-0]
2026-01-10 15:01:50.857433 | orchestrator |
2026-01-10 15:01:50.857440 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-01-10 15:01:50.857447 | orchestrator | Saturday 10 January 2026 15:01:44 +0000 (0:00:00.481) 0:00:10.253 ******
2026-01-10 15:01:50.857454 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:01:50.857460 | orchestrator |
2026-01-10 15:01:50.857467 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-01-10 15:01:50.857474 | orchestrator | Saturday 10 January 2026 15:01:45 +0000 (0:00:00.349) 0:00:10.734 ******
2026-01-10 15:01:50.857480 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:01:50.857487 | orchestrator |
2026-01-10 15:01:50.857494 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-01-10 15:01:50.857500 | orchestrator | Saturday 10 January 2026 15:01:45 +0000 (0:00:00.141) 0:00:11.084 ******
2026-01-10 15:01:50.857507 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:01:50.857514 | orchestrator |
2026-01-10 15:01:50.857521 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-01-10 15:01:50.857528 | orchestrator | Saturday 10 January 2026 15:01:45 +0000 (0:00:00.156) 0:00:11.225 ******
2026-01-10 15:01:50.857534 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:01:50.857541 | orchestrator |
2026-01-10 15:01:50.857548 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-10 15:01:50.857554 | orchestrator | Saturday 10 January 2026 15:01:45 +0000 (0:00:00.294) 0:00:11.382 ******
2026-01-10 15:01:50.857561 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:01:50.857568 | orchestrator |
2026-01-10 15:01:50.857574 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-10 15:01:50.857581 | orchestrator | Saturday 10 January 2026 15:01:46 +0000 (0:00:00.298) 0:00:11.676 ******
2026-01-10 15:01:50.857588 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:01:50.857595 | orchestrator |
2026-01-10 15:01:50.857602 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:01:50.857608 | orchestrator | Saturday 10 January 2026 15:01:46 +0000 (0:00:01.496) 0:00:11.975 ******
2026-01-10 15:01:50.857623 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:01:50.857630 | orchestrator |
2026-01-10 15:01:50.857636 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:01:50.857641 | orchestrator | Saturday 10 January 2026 15:01:47 +0000 (0:00:00.329) 0:00:13.471 ******
2026-01-10 15:01:50.857647 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:01:50.857653 | orchestrator |
2026-01-10 15:01:50.857660 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:01:50.857673 | orchestrator | Saturday 10 January 2026 15:01:48 +0000 (0:00:00.255) 0:00:13.800 ******
2026-01-10 15:01:50.857680 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:01:50.857687 | orchestrator |
2026-01-10 15:01:50.857693 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:01:50.857699 | orchestrator | Saturday 10 January 2026 15:01:48 +0000 (0:00:00.071) 0:00:14.056 ******
2026-01-10 15:01:50.857706 | orchestrator |
2026-01-10 15:01:50.857712 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:01:50.857719 | orchestrator | Saturday 10 January 2026 15:01:48 +0000 (0:00:00.076) 0:00:14.127 ******
2026-01-10 15:01:50.857725 | orchestrator |
2026-01-10 15:01:50.857732 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:01:50.857739 | orchestrator | Saturday 10 January 2026 15:01:48 +0000 (0:00:00.295) 0:00:14.204 ******
2026-01-10 15:01:50.857745 | orchestrator |
2026-01-10 15:01:50.857752 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-10 15:01:50.857759 | orchestrator | Saturday 10 January 2026 15:01:48 +0000 (0:00:01.463) 0:00:14.499 ******
2026-01-10 15:01:50.857765 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:01:50.857772 | orchestrator |
2026-01-10 15:01:50.857779 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:01:50.857786 | orchestrator | Saturday 10 January 2026 15:01:50 +0000 (0:00:00.436) 0:00:15.962 ******
2026-01-10 15:01:50.857792 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-01-10 15:01:50.857799 | orchestrator |  "msg": [
2026-01-10 15:01:50.857806 | orchestrator |  "Validator run completed.",
2026-01-10 15:01:50.857813 | orchestrator |  "You can find the report file here:",
2026-01-10 15:01:50.857820 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-01-10T15:01:35+00:00-report.json",
2026-01-10 15:01:50.857828 | orchestrator |  "on the following host:",
2026-01-10 15:01:50.857835 | orchestrator |  "testbed-manager"
2026-01-10 15:01:50.857842 | orchestrator |  ]
2026-01-10 15:01:50.857850 | orchestrator | }
2026-01-10 15:01:50.857857 | orchestrator |
2026-01-10 15:01:50.857864 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:01:50.857873 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 15:01:50.857881 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:01:50.857894 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:01:51.271881 | orchestrator |
2026-01-10 15:01:51.271966 | orchestrator |
2026-01-10 15:01:51.271976 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:01:51.271985 | orchestrator | Saturday 10 January 2026 15:01:50 +0000 (0:00:00.436) 0:00:16.398 ******
2026-01-10 15:01:51.271993 | orchestrator | ===============================================================================
2026-01-10 15:01:51.272000 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.27s
2026-01-10 15:01:51.272007 | orchestrator | Aggregate test results step one ----------------------------------------- 1.50s
2026-01-10 15:01:51.272014 | orchestrator | Write report file ------------------------------------------------------- 1.46s
2026-01-10 15:01:51.272021 | orchestrator | Get container info ------------------------------------------------------ 1.18s
2026-01-10 15:01:51.272027 | orchestrator | Create report output directory ------------------------------------------ 1.11s
2026-01-10 15:01:51.272034 | orchestrator | Get timestamp for report file ------------------------------------------- 0.90s
2026-01-10 15:01:51.272041 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.61s
2026-01-10 15:01:51.272068 | orchestrator | Set test result to passed if container is existing ---------------------- 0.60s
2026-01-10 15:01:51.272075 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.48s
2026-01-10 15:01:51.272082 | orchestrator | Flush handlers ---------------------------------------------------------- 0.44s
2026-01-10 15:01:51.272089 | orchestrator | Print report file information ------------------------------------------- 0.44s
2026-01-10 15:01:51.272095 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.35s
2026-01-10 15:01:51.272102 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-01-10 15:01:51.272108 | orchestrator | Aggregate test results step two ----------------------------------------- 0.33s
2026-01-10 15:01:51.272115 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s
2026-01-10 15:01:51.272134 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-01-10 15:01:51.272141 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s
2026-01-10 15:01:51.272148 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.30s
2026-01-10 15:01:51.272155 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s
2026-01-10 15:01:51.272162 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s
2026-01-10 15:01:51.683990 | orchestrator | + osism validate ceph-osds
2026-01-10 15:02:13.602370 | orchestrator |
2026-01-10 15:02:13.602507 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-01-10 15:02:13.602517 | orchestrator |
2026-01-10 15:02:13.602525 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-10 15:02:13.602533 | orchestrator | Saturday 10 January 2026 15:02:08 +0000 (0:00:00.435) 0:00:00.435 ******
2026-01-10 15:02:13.602541 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:02:13.602548 | orchestrator |
2026-01-10 15:02:13.602556 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 15:02:13.602563 | orchestrator | Saturday 10 January 2026 15:02:09 +0000 (0:00:00.894) 0:00:01.330 ******
2026-01-10 15:02:13.602570 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:02:13.602577 | orchestrator |
2026-01-10 15:02:13.602584 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-10 15:02:13.602591 | orchestrator | Saturday 10 January 2026 15:02:10 +0000 (0:00:00.554) 0:00:01.884 ******
2026-01-10 15:02:13.602598 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:02:13.602605 | orchestrator |
2026-01-10 15:02:13.602612 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-10 15:02:13.602619 | orchestrator | Saturday 10 January 2026 15:02:11 +0000 (0:00:00.785) 0:00:02.670 ******
2026-01-10 15:02:13.602626 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:13.602634 | orchestrator |
2026-01-10 15:02:13.602641 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-01-10 15:02:13.602648 | orchestrator | Saturday 10 January 2026 15:02:11 +0000 (0:00:00.124) 0:00:02.794 ******
2026-01-10 15:02:13.602655 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:13.602662 | orchestrator |
2026-01-10 15:02:13.602669 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-01-10 15:02:13.602675 | orchestrator | Saturday 10 January 2026 15:02:11 +0000 (0:00:00.133) 0:00:02.928 ******
2026-01-10 15:02:13.602682 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:13.602689 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:02:13.602696 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:02:13.602703 | orchestrator |
2026-01-10 15:02:13.602709 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-01-10 15:02:13.602716 | orchestrator | Saturday 10 January 2026 15:02:11 +0000 (0:00:00.324) 0:00:03.252 ******
2026-01-10 15:02:13.602723 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:13.602730 | orchestrator |
2026-01-10 15:02:13.602736 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-01-10 15:02:13.602766 | orchestrator | Saturday 10 January 2026 15:02:11 +0000 (0:00:00.149) 0:00:03.402 ******
2026-01-10 15:02:13.602773 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:13.602780 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:13.602786 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:13.602793 | orchestrator |
2026-01-10 15:02:13.602800 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-01-10 15:02:13.602807 | orchestrator | Saturday 10 January 2026 15:02:12 +0000 (0:00:00.346) 0:00:03.748 ******
2026-01-10 15:02:13.602814 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:13.602820 | orchestrator |
2026-01-10 15:02:13.602827 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:02:13.602834 | orchestrator | Saturday 10 January 2026 15:02:12 +0000 (0:00:00.681) 0:00:04.430 ******
2026-01-10 15:02:13.602841 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:13.602848 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:13.602855 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:13.602861 | orchestrator |
2026-01-10 15:02:13.602868 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-01-10 15:02:13.602875 | orchestrator | Saturday 10 January 2026 15:02:13 +0000 (0:00:00.549) 0:00:04.980 ******
2026-01-10 15:02:13.602899 | orchestrator | skipping: [testbed-node-3] => (item={'id': '82ac2afceaef9331ce27a7cb5c24fc05daf1689c26a6aed4cdb851609bf06b0d', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-10 15:02:13.602909 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6c1b2a443d7e1b29dbcffd9440a5589ef3d8a2266ccd5d944d8cba8b072a7ac6', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2026-01-10 15:02:13.602918 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f5e723007cafd21a47e3e9682e3ebadd1d9c82bc828115376ec16ac8b7a0ec4e', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2026-01-10 15:02:13.602926 | orchestrator | skipping: [testbed-node-3] => (item={'id': '58d21c4d1bf962e29a450962ec5a8b59163572535c7845deb74edfba3937eb3e', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2026-01-10 15:02:13.602944 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5e649d86806fe36fb7f9105b3bbd5e15475ee49aac5f569a67c4120ec4dd3cc6', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2026-01-10 15:02:13.602966 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5df5bd2fb90c3b0bbcc4fbdd8481ed97e8a840a78cca88a66d6fa982b83dc438', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-01-10 15:02:13.602973 | orchestrator | skipping: [testbed-node-3] => (item={'id': '05503a5e758d8b141a3ca6706bda4a47972c3832978691c5f9f5ab80dd89df44', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2026-01-10 15:02:13.602979 | orchestrator | skipping: [testbed-node-3] => (item={'id': '37bdb2115819c4dbb5711b9844d6380fbd770312b9e8e46f6e25e39e371adb30', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2026-01-10 15:02:13.602986 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd6631c0f0c8eaea3062bcbecf7363759db561fba9bc4101bf4234ea69cd5fedc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})
2026-01-10 15:02:13.602993 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ce0707fe9ce117ccde096f3d46a959237e721ba0a757dd0165bde4244aa58fc4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})
2026-01-10 15:02:13.603005 | orchestrator | ok: [testbed-node-3] => (item={'id': '14c2dd5cafcf60f6d37df051ffdee6dbd55267f8aeba0d2595ed59c0ef0c9fac', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-10 15:02:13.603012 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ba978dafb6261a0ad0d2db64423122dd07d4ee0904c4314ad474a2973977b673', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'})
2026-01-10 15:02:13.603019 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'daae8bf974f9e625b2c1a302a90ca2718b832d048e3104f63c5f2d7d8b527035', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2026-01-10 15:02:13.603025 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2249f32d93396da13a91ce06f53ed4290bc376eb0e27e45e179ffb6468c22552', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-01-10 15:02:13.603034 | orchestrator | skipping: [testbed-node-3] => (item={'id': '10b0a99110ea4794d48b1b5b262f85b387c70a0fc93d9daaa6f49a46aeeceeab', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-01-10 15:02:13.603042 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1c8118b32a071d43166068cd9fb85eb654806cf0e4c3763b8c3e863ed2f9e050', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2026-01-10 15:02:13.603048 | orchestrator | skipping: [testbed-node-3] => (item={'id': '506c8175613f07428f606a8aa234d48dd4a4eaff67efdea882b666b0988a2724', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2026-01-10 15:02:13.603055 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b4f3abb08baf0fa7f9fd1525bdec80b1b354f706db3e9eae9ba10fbd6eb08b30', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2026-01-10 15:02:13.603062 | orchestrator | skipping: [testbed-node-4] => (item={'id': '864106dc9d9b0b65a5fb7832d03e131620d23663a4fc26a02eebbe16b4e8c189', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-10 15:02:13.603068 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7d309c0d57873dbd59ca6f637fc1a4a76ccc552eb7a6c56cc2a416f5963659de', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2026-01-10 15:02:13.603087 | orchestrator | skipping: [testbed-node-4] => (item={'id': '343dabe35698f99493558d086202782a126160695faf61930a62ffaa28d010b5', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2026-01-10 15:02:13.603099 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2118036107a7e69700b12708c199501f1d844227408b2bc677c67bcc1807381c', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2026-01-10 15:02:13.854632 | orchestrator | skipping: [testbed-node-4] => (item={'id': '79b3c1b18aee3b694003c6d8d9be437eeeafa719a5b9530dbc6c4a52fe93be76', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2026-01-10 15:02:13.854718 | orchestrator | skipping: [testbed-node-4] => (item={'id': '85fefc560fedaa50e54a2b5e05197cd4b276cb1ce6df9a5cc99699cf96330931', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-01-10 15:02:13.854746 | orchestrator | skipping: [testbed-node-4] => (item={'id': '46c1e8d406723ba78129a14d5d79808a7c42dcafe5072bc5d7fcf1ac6745aad7', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2026-01-10 15:02:13.854753 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a8a04c789a91bfdb0919c524389e923d5b6388c6cb3bafb29b47ead6ca547147', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2026-01-10 15:02:13.854760 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3e69ab1825c0c7711a20212c087ea05ff50bf6cfe55eea40b6815a3360829d3e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2026-01-10 15:02:13.854768 | orchestrator | skipping: [testbed-node-4] => (item={'id': '474331339fbd8b6ebda34879f9815c80ecefb5fa1c0d1a6218feb892b7d3b50f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2026-01-10 15:02:13.854777 | orchestrator | ok: [testbed-node-4] => (item={'id': 'bc249b7318ecf894e6c426a5e31306dac6918e610f21a17dfcf8f32f6cd6dae9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-10 15:02:13.854784 | orchestrator | ok: [testbed-node-4] => (item={'id': 'a4a973ff6279b78c34913b6828e4be250bc3de4f581c6128a5fe8a40f5f275f6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-10 15:02:13.854791 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e212fadf2000fe9e5cfac2ee56d04387fa602c60d85cf65d3bc1337e49871969', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2026-01-10 15:02:13.854798 | orchestrator | skipping: [testbed-node-4] => (item={'id': '035b1c44268a44e131b9fe0c539f761414328b7e2228e40b74830688876adc04', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-01-10 15:02:13.854805 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9bf12014081e153f0ce184a5e5ee93a6ee4cd9198640393119ba5801b0858179', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-01-10 15:02:13.854813 | orchestrator | skipping: [testbed-node-4] => (item={'id': '582c1b0d2e74e7aa98115ea40e2241259f4624863d10980e8384a4a36120aee6', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2026-01-10 15:02:13.854820 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ff4e072ceb3cc025c8bb6482a29fffbdc423a0a5eca8b352811efd7d1442d740', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2026-01-10 15:02:13.854841 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'baa5cdc47040f1481be9edd9948d308818124747005530a6c2d649f3e0f7fa84', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2026-01-10 15:02:13.854848 | orchestrator | skipping: [testbed-node-5] => (item={'id': '131c9f2d66725416eb664183c1b17a092457e3bf9febae53b714b11e05c511cf', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-10 15:02:13.854868 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f4a875340786f2098ab53d6eda4d8c95ab1e111fbe769df8c2a64e80bcc2f2ef', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2026-01-10 15:02:13.854881 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5d29f97faccf6343353982323c442987060a703d6a4686d6c258f9097917d9fe', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2026-01-10 15:02:13.854885 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6f2decda46362a97998cb03f0baca112b6d36e07e07115837f963d958154cfc3', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2026-01-10 15:02:13.854891 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5bad3606fea21dc095937981bd8f400d8f37438d8c93e8bf91450f47de4e096b', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2026-01-10 15:02:13.854897 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4bc073f19a9e2dd530ca63635ad0254e47f774c24a5d2fa66deb18ece9207ebe', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-01-10 15:02:13.854903 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5a0e75dcefdab1f21d0877a7e986e0cf65373f22b0303b82467d4c81216870ed', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2026-01-10 15:02:13.854909 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2b3c449b3c2c227f8d30177805ef18e7b3f25b8f8f6115eb5532c7711d84d95', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2026-01-10 15:02:13.854915 | orchestrator | skipping: [testbed-node-5] => (item={'id': '77321ae8e525cefb01b5187183735acd7fd6fc9c3ccd619512f7b1751ba162d6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})
2026-01-10 15:02:13.854921 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f6232e9c784a531a15d4e0c12d7a6f50494994056cfa85e2285be35e2ec45557', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})
2026-01-10 15:02:13.854927 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ce3f73d6c343daece11d7025438d91d5043515d2b7e831cb38fc00c442ddab41', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-10 15:02:13.854933 | orchestrator | ok: [testbed-node-5] => (item={'id': '678aec9c5bc1ddebd26328e4550c84d56991d90947fbffe9d43e33f51de4ad42', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'})
2026-01-10 15:02:13.854939 | orchestrator | skipping: [testbed-node-5] => (item={'id': '68b82f0c089f06c78909d7a80539e3a0ce81217a7e84b0244715c26eedeadc79', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2026-01-10 15:02:13.854944 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fe0732ce17511076bf62bf1d63dc527ef44cdc8e685090760385c995024b910d', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-01-10 15:02:13.854951 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1a2b6c8400f1031b40ee580fae255ec0fddd4a5a3d89c38371b2880095f2f846', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-01-10 15:02:13.854961 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8dce186617507273c49195c94b46bb479b2727e6fc622895d52f844e623ad848', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2026-01-10 15:02:13.854973 | orchestrator | skipping: [testbed-node-5] => (item={'id': '39aaa1746a04910c17959ce7af6309de7f5a6adff1ce3c13fb1f49051d1ed982', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2026-01-10 15:02:13.854984 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b01f43c0ee71a5e999da06e9d24927e71bab435547a98706d02fd70cf9c1740d', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2026-01-10 15:02:28.818778 | orchestrator |
2026-01-10 15:02:28.818853 | orchestrator | TASK [Get count of ceph-osd containers on
host] ******************************** 2026-01-10 15:02:28.818861 | orchestrator | Saturday 10 January 2026 15:02:13 +0000 (0:00:00.518) 0:00:05.498 ****** 2026-01-10 15:02:28.818865 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:02:28.818870 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:02:28.818874 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:02:28.818878 | orchestrator | 2026-01-10 15:02:28.818882 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-01-10 15:02:28.818886 | orchestrator | Saturday 10 January 2026 15:02:14 +0000 (0:00:00.323) 0:00:05.821 ****** 2026-01-10 15:02:28.818890 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:02:28.818894 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:02:28.818898 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:02:28.818902 | orchestrator | 2026-01-10 15:02:28.818906 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-01-10 15:02:28.818910 | orchestrator | Saturday 10 January 2026 15:02:14 +0000 (0:00:00.548) 0:00:06.370 ****** 2026-01-10 15:02:28.818914 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:02:28.818918 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:02:28.818922 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:02:28.818926 | orchestrator | 2026-01-10 15:02:28.818929 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-10 15:02:28.818933 | orchestrator | Saturday 10 January 2026 15:02:15 +0000 (0:00:00.344) 0:00:06.715 ****** 2026-01-10 15:02:28.818937 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:02:28.818941 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:02:28.818945 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:02:28.818949 | orchestrator | 2026-01-10 15:02:28.818953 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-01-10 
15:02:28.818957 | orchestrator | Saturday 10 January 2026 15:02:15 +0000 (0:00:00.331) 0:00:07.047 ****** 2026-01-10 15:02:28.818961 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-01-10 15:02:28.818966 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-01-10 15:02:28.818970 | orchestrator | skipping: [testbed-node-3] 2026-01-10 15:02:28.818973 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-01-10 15:02:28.818977 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-01-10 15:02:28.818981 | orchestrator | skipping: [testbed-node-4] 2026-01-10 15:02:28.818985 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-01-10 15:02:28.818989 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-01-10 15:02:28.818993 | orchestrator | skipping: [testbed-node-5] 2026-01-10 15:02:28.818996 | orchestrator | 2026-01-10 15:02:28.819000 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-01-10 15:02:28.819004 | orchestrator | Saturday 10 January 2026 15:02:15 +0000 (0:00:00.334) 0:00:07.382 ****** 2026-01-10 15:02:28.819008 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:02:28.819012 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:02:28.819032 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:02:28.819036 | orchestrator | 2026-01-10 15:02:28.819040 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-01-10 15:02:28.819044 | orchestrator | Saturday 10 January 2026 15:02:16 +0000 (0:00:00.586) 0:00:07.968 ****** 2026-01-10 15:02:28.819048 | orchestrator | skipping: [testbed-node-3] 
2026-01-10 15:02:28.819052 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:02:28.819056 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:02:28.819059 | orchestrator |
2026-01-10 15:02:28.819063 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-01-10 15:02:28.819067 | orchestrator | Saturday 10 January 2026 15:02:16 +0000 (0:00:00.303) 0:00:08.272 ******
2026-01-10 15:02:28.819071 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:28.819075 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:02:28.819078 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:02:28.819082 | orchestrator |
2026-01-10 15:02:28.819086 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-01-10 15:02:28.819090 | orchestrator | Saturday 10 January 2026 15:02:16 +0000 (0:00:00.304) 0:00:08.576 ******
2026-01-10 15:02:28.819094 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:28.819098 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:28.819102 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:28.819106 | orchestrator |
2026-01-10 15:02:28.819109 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:02:28.819113 | orchestrator | Saturday 10 January 2026 15:02:17 +0000 (0:00:00.329) 0:00:08.905 ******
2026-01-10 15:02:28.819117 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:28.819121 | orchestrator |
2026-01-10 15:02:28.819125 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:02:28.819129 | orchestrator | Saturday 10 January 2026 15:02:17 +0000 (0:00:00.543) 0:00:09.448 ******
2026-01-10 15:02:28.819133 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:28.819136 | orchestrator |
2026-01-10 15:02:28.819140 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:02:28.819144 | orchestrator | Saturday 10 January 2026 15:02:18 +0000 (0:00:00.717) 0:00:10.166 ******
2026-01-10 15:02:28.819148 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:28.819152 | orchestrator |
2026-01-10 15:02:28.819156 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:02:28.819159 | orchestrator | Saturday 10 January 2026 15:02:18 +0000 (0:00:00.080) 0:00:10.445 ******
2026-01-10 15:02:28.819163 | orchestrator |
2026-01-10 15:02:28.819167 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:02:28.819171 | orchestrator | Saturday 10 January 2026 15:02:18 +0000 (0:00:00.071) 0:00:10.525 ******
2026-01-10 15:02:28.819175 | orchestrator |
2026-01-10 15:02:28.819179 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:02:28.819192 | orchestrator | Saturday 10 January 2026 15:02:18 +0000 (0:00:00.070) 0:00:10.597 ******
2026-01-10 15:02:28.819196 | orchestrator |
2026-01-10 15:02:28.819200 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:02:28.819204 | orchestrator | Saturday 10 January 2026 15:02:19 +0000 (0:00:00.070) 0:00:10.668 ******
2026-01-10 15:02:28.819207 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:28.819211 | orchestrator |
2026-01-10 15:02:28.819215 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-01-10 15:02:28.819219 | orchestrator | Saturday 10 January 2026 15:02:19 +0000 (0:00:00.257) 0:00:10.925 ******
2026-01-10 15:02:28.819223 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:28.819226 | orchestrator |
2026-01-10 15:02:28.819230 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:02:28.819234 | orchestrator | Saturday 10 January 2026 15:02:19 +0000 (0:00:00.270) 0:00:11.196 ******
2026-01-10 15:02:28.819238 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:28.819242 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:28.819282 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:28.819287 | orchestrator |
2026-01-10 15:02:28.819291 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-01-10 15:02:28.819294 | orchestrator | Saturday 10 January 2026 15:02:19 +0000 (0:00:00.251) 0:00:11.487 ******
2026-01-10 15:02:28.819298 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:28.819302 | orchestrator |
2026-01-10 15:02:28.819306 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-01-10 15:02:28.819310 | orchestrator | Saturday 10 January 2026 15:02:20 +0000 (0:00:00.251) 0:00:11.739 ******
2026-01-10 15:02:28.819314 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-10 15:02:28.819318 | orchestrator |
2026-01-10 15:02:28.819322 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-01-10 15:02:28.819325 | orchestrator | Saturday 10 January 2026 15:02:22 +0000 (0:00:02.388) 0:00:14.127 ******
2026-01-10 15:02:28.819330 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:28.819334 | orchestrator |
2026-01-10 15:02:28.819339 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-01-10 15:02:28.819343 | orchestrator | Saturday 10 January 2026 15:02:22 +0000 (0:00:00.136) 0:00:14.264 ******
2026-01-10 15:02:28.819347 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:28.819351 | orchestrator |
2026-01-10 15:02:28.819356 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-01-10 15:02:28.819360 | orchestrator | Saturday 10 January 2026 15:02:22 +0000 (0:00:00.341) 0:00:14.606 ******
2026-01-10 15:02:28.819365 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:28.819369 | orchestrator |
2026-01-10 15:02:28.819373 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-01-10 15:02:28.819377 | orchestrator | Saturday 10 January 2026 15:02:23 +0000 (0:00:00.140) 0:00:14.746 ******
2026-01-10 15:02:28.819382 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:28.819386 | orchestrator |
2026-01-10 15:02:28.819390 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:02:28.819395 | orchestrator | Saturday 10 January 2026 15:02:23 +0000 (0:00:00.141) 0:00:14.888 ******
2026-01-10 15:02:28.819399 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:28.819403 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:28.819408 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:28.819491 | orchestrator |
2026-01-10 15:02:28.819499 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-01-10 15:02:28.819504 | orchestrator | Saturday 10 January 2026 15:02:23 +0000 (0:00:00.297) 0:00:15.186 ******
2026-01-10 15:02:28.819508 | orchestrator | changed: [testbed-node-3]
2026-01-10 15:02:28.819513 | orchestrator | changed: [testbed-node-4]
2026-01-10 15:02:28.819517 | orchestrator | changed: [testbed-node-5]
2026-01-10 15:02:28.819522 | orchestrator |
2026-01-10 15:02:28.819526 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-01-10 15:02:28.819530 | orchestrator | Saturday 10 January 2026 15:02:26 +0000 (0:00:02.546) 0:00:17.732 ******
2026-01-10 15:02:28.819534 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:28.819539 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:28.819543 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:28.819547 | orchestrator |
2026-01-10 15:02:28.819551 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-01-10 15:02:28.819555 | orchestrator | Saturday 10 January 2026 15:02:26 +0000 (0:00:00.647) 0:00:18.380 ******
2026-01-10 15:02:28.819560 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:28.819564 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:28.819568 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:28.819573 | orchestrator |
2026-01-10 15:02:28.819577 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-01-10 15:02:28.819581 | orchestrator | Saturday 10 January 2026 15:02:27 +0000 (0:00:00.566) 0:00:18.947 ******
2026-01-10 15:02:28.819586 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:28.819590 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:02:28.819599 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:02:28.819603 | orchestrator |
2026-01-10 15:02:28.819607 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-01-10 15:02:28.819616 | orchestrator | Saturday 10 January 2026 15:02:27 +0000 (0:00:00.353) 0:00:19.300 ******
2026-01-10 15:02:28.819620 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:28.819624 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:28.819628 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:28.819633 | orchestrator |
2026-01-10 15:02:28.819637 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-01-10 15:02:28.819641 | orchestrator | Saturday 10 January 2026 15:02:28 +0000 (0:00:00.556) 0:00:19.856 ******
2026-01-10 15:02:28.819645 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:28.819650 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:02:28.819654 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:02:28.819658 | orchestrator |
2026-01-10 15:02:28.819663 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-01-10 15:02:28.819667 | orchestrator | Saturday 10 January 2026 15:02:28 +0000 (0:00:00.315) 0:00:20.172 ******
2026-01-10 15:02:28.819672 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:28.819676 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:02:28.819680 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:02:28.819684 | orchestrator |
2026-01-10 15:02:28.819693 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:02:37.415738 | orchestrator | Saturday 10 January 2026 15:02:28 +0000 (0:00:00.296) 0:00:20.468 ******
2026-01-10 15:02:37.415842 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:37.415850 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:37.415855 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:37.415859 | orchestrator |
2026-01-10 15:02:37.415863 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-01-10 15:02:37.415868 | orchestrator | Saturday 10 January 2026 15:02:29 +0000 (0:00:00.558) 0:00:21.026 ******
2026-01-10 15:02:37.415872 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:37.415876 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:37.415880 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:37.415884 | orchestrator |
2026-01-10 15:02:37.415889 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-01-10 15:02:37.415893 | orchestrator | Saturday 10 January 2026 15:02:30 +0000 (0:00:01.013) 0:00:22.040 ******
2026-01-10 15:02:37.415897 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:37.415901 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:37.415904 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:37.415908 | orchestrator |
2026-01-10 15:02:37.415912 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-01-10 15:02:37.415916 | orchestrator | Saturday 10 January 2026 15:02:30 +0000 (0:00:00.352) 0:00:22.393 ******
2026-01-10 15:02:37.415920 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:37.415925 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:02:37.415928 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:02:37.415932 | orchestrator |
2026-01-10 15:02:37.415936 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-01-10 15:02:37.415940 | orchestrator | Saturday 10 January 2026 15:02:31 +0000 (0:00:00.326) 0:00:22.719 ******
2026-01-10 15:02:37.415950 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:02:37.415955 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:02:37.415958 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:02:37.415962 | orchestrator |
2026-01-10 15:02:37.415966 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-10 15:02:37.415970 | orchestrator | Saturday 10 January 2026 15:02:31 +0000 (0:00:00.320) 0:00:23.039 ******
2026-01-10 15:02:37.415974 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:02:37.415978 | orchestrator |
2026-01-10 15:02:37.415982 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-10 15:02:37.416008 | orchestrator | Saturday 10 January 2026 15:02:31 +0000 (0:00:00.289) 0:00:23.329 ******
2026-01-10 15:02:37.416012 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:02:37.416016 | orchestrator |
2026-01-10 15:02:37.416020 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:02:37.416024 | orchestrator | Saturday 10 January 2026 15:02:32 +0000 (0:00:00.810) 0:00:24.140 ******
2026-01-10 15:02:37.416028 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:02:37.416031 | orchestrator |
2026-01-10 15:02:37.416035 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:02:37.416040 | orchestrator | Saturday 10 January 2026 15:02:34 +0000 (0:00:01.750) 0:00:25.890 ******
2026-01-10 15:02:37.416046 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:02:37.416052 | orchestrator |
2026-01-10 15:02:37.416058 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:02:37.416064 | orchestrator | Saturday 10 January 2026 15:02:34 +0000 (0:00:00.335) 0:00:26.226 ******
2026-01-10 15:02:37.416070 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:02:37.416077 | orchestrator |
2026-01-10 15:02:37.416083 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:02:37.416089 | orchestrator | Saturday 10 January 2026 15:02:34 +0000 (0:00:00.072) 0:00:26.488 ******
2026-01-10 15:02:37.416096 | orchestrator |
2026-01-10 15:02:37.416100 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:02:37.416104 | orchestrator | Saturday 10 January 2026 15:02:34 +0000 (0:00:00.068) 0:00:26.561 ******
2026-01-10 15:02:37.416108 | orchestrator |
2026-01-10 15:02:37.416111 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:02:37.416115 | orchestrator | Saturday 10 January 2026 15:02:34 +0000 (0:00:00.068) 0:00:26.630 ******
2026-01-10 15:02:37.416120 | orchestrator |
2026-01-10 15:02:37.416123 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-10 15:02:37.416127 | orchestrator | Saturday 10 January 2026 15:02:35 +0000 (0:00:00.087) 0:00:26.717 ******
2026-01-10 15:02:37.416131 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:02:37.416135 | orchestrator |
2026-01-10 15:02:37.416139 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:02:37.416142 | orchestrator | Saturday 10 January 2026 15:02:36 +0000 (0:00:01.438) 0:00:28.155 ******
2026-01-10 15:02:37.416146 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-01-10 15:02:37.416150 | orchestrator |     "msg": [
2026-01-10 15:02:37.416165 | orchestrator |         "Validator run completed.",
2026-01-10 15:02:37.416169 | orchestrator |         "You can find the report file here:",
2026-01-10 15:02:37.416173 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2026-01-10T15:02:09+00:00-report.json",
2026-01-10 15:02:37.416178 | orchestrator |         "on the following host:",
2026-01-10 15:02:37.416182 | orchestrator |         "testbed-manager"
2026-01-10 15:02:37.416186 | orchestrator |     ]
2026-01-10 15:02:37.416191 | orchestrator | }
2026-01-10 15:02:37.416195 | orchestrator |
2026-01-10 15:02:37.416199 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:02:37.416203 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 15:02:37.416209 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 15:02:37.416224 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 15:02:37.416228 | orchestrator |
2026-01-10 15:02:37.416232 | orchestrator |
2026-01-10 15:02:37.416236 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:02:37.416251 | orchestrator | Saturday 10 January 2026 15:02:36 +0000 (0:00:00.406) 0:00:28.562 ******
2026-01-10 15:02:37.416256 | orchestrator | ===============================================================================
2026-01-10 15:02:37.416260 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.55s
2026-01-10 15:02:37.416264 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.39s
2026-01-10 15:02:37.416269 | orchestrator | Aggregate test results step one ----------------------------------------- 1.75s
2026-01-10 15:02:37.416273 | orchestrator | Write report file ------------------------------------------------------- 1.44s
2026-01-10 15:02:37.416277 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 1.01s
2026-01-10 15:02:37.416281 | orchestrator | Get timestamp for report file ------------------------------------------- 0.89s
2026-01-10 15:02:37.416286 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.81s
2026-01-10 15:02:37.416290 | orchestrator | Create report output directory ------------------------------------------ 0.79s
2026-01-10 15:02:37.416295 | orchestrator | Aggregate test results step two ----------------------------------------- 0.72s
2026-01-10 15:02:37.416299 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.68s
2026-01-10 15:02:37.416303 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.65s
2026-01-10 15:02:37.416308 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.59s
2026-01-10 15:02:37.416312 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.57s
2026-01-10 15:02:37.416316 | orchestrator | Prepare test data ------------------------------------------------------- 0.56s
2026-01-10 15:02:37.416320 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.56s
2026-01-10 15:02:37.416325 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.55s
2026-01-10 15:02:37.416329 | orchestrator | Prepare test data ------------------------------------------------------- 0.55s
2026-01-10 15:02:37.416334 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.55s
2026-01-10 15:02:37.416338 | orchestrator | Aggregate test results step one ----------------------------------------- 0.54s
2026-01-10 15:02:37.416342 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.52s
2026-01-10 15:02:37.807333 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-01-10 15:02:37.818086 | orchestrator | + set -e
2026-01-10 15:02:37.818176 | orchestrator | + source /opt/manager-vars.sh
2026-01-10 15:02:37.818187 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-10 15:02:37.818317 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-10 15:02:37.818327 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-10 15:02:37.818331 | orchestrator | ++ CEPH_VERSION=reef
2026-01-10 15:02:37.818335 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-10 15:02:37.818340 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-10 15:02:37.818345 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-10 15:02:37.818352 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-10 15:02:37.818358 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-10 15:02:37.818365 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-10 15:02:37.818372 | orchestrator | ++ export ARA=false
2026-01-10 15:02:37.818377 | orchestrator | ++ ARA=false
2026-01-10 15:02:37.818381 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-10 15:02:37.818385 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-10 15:02:37.818389 | orchestrator | ++ export TEMPEST=false
2026-01-10 15:02:37.818393 | orchestrator | ++ TEMPEST=false
2026-01-10 15:02:37.818399 | orchestrator | ++ export IS_ZUUL=true
2026-01-10 15:02:37.818405 | orchestrator | ++ IS_ZUUL=true
2026-01-10 15:02:37.818411 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62
2026-01-10 15:02:37.818418 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62
2026-01-10 15:02:37.818425 | orchestrator | ++ export EXTERNAL_API=false
2026-01-10 15:02:37.818431 | orchestrator | ++ EXTERNAL_API=false
2026-01-10 15:02:37.818454 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-10 15:02:37.818459 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-10 15:02:37.818465 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-10 15:02:37.818471 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-10 15:02:37.818546 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-10 15:02:37.818556 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-10 15:02:37.818563 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-10 15:02:37.818570 | orchestrator | + source /etc/os-release
2026-01-10 15:02:37.818577 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS'
2026-01-10 15:02:37.818581 | orchestrator | ++ NAME=Ubuntu
2026-01-10 15:02:37.818585 | orchestrator | ++ VERSION_ID=24.04
2026-01-10 15:02:37.818601 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)'
2026-01-10 15:02:37.818605 | orchestrator | ++ VERSION_CODENAME=noble
2026-01-10 15:02:37.818610 | orchestrator | ++ ID=ubuntu
2026-01-10 15:02:37.818614 | orchestrator | ++ ID_LIKE=debian
2026-01-10 15:02:37.818617 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-01-10 15:02:37.818622 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-01-10 15:02:37.818629 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-01-10 15:02:37.819155 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-01-10 15:02:37.819185 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-01-10 15:02:37.819191 | orchestrator | ++ LOGO=ubuntu-logo
2026-01-10 15:02:37.819197 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-01-10 15:02:37.819243 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-01-10 15:02:37.819252 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-01-10 15:02:37.851459 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-01-10 15:03:03.823061 | orchestrator |
2026-01-10 15:03:03.823179 | orchestrator | # Status of Elasticsearch
2026-01-10 15:03:03.823192 | orchestrator |
2026-01-10 15:03:03.823199 | orchestrator | + pushd /opt/configuration/contrib
2026-01-10 15:03:03.823207 | orchestrator | + echo
2026-01-10 15:03:03.823213 | orchestrator | + echo '# Status of Elasticsearch'
2026-01-10 15:03:03.823219 | orchestrator | + echo
2026-01-10 15:03:03.823236 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-01-10 15:03:04.014404 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-01-10 15:03:04.015133 | orchestrator |
2026-01-10 15:03:04.015184 | orchestrator | # Status of MariaDB
2026-01-10 15:03:04.015191 | orchestrator |
2026-01-10 15:03:04.015196 | orchestrator | + echo
2026-01-10 15:03:04.015201 | orchestrator | + echo '# Status of MariaDB'
2026-01-10 15:03:04.015205 | orchestrator | + echo
2026-01-10 15:03:04.016204 | orchestrator | ++ semver latest 10.0.0-0
2026-01-10 15:03:04.088286 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-10 15:03:04.088372 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-10 15:03:04.088382 | orchestrator | + osism status database
2026-01-10 15:03:06.197071 | orchestrator | 2026-01-10 15:03:06 | ERROR  | Unable to get ansible vault password
2026-01-10 15:03:06.197143 | orchestrator | 2026-01-10 15:03:06 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-01-10 15:03:06.197151 | orchestrator | 2026-01-10 15:03:06 | ERROR  | Dropping encrypted entries
2026-01-10 15:03:06.228458 | orchestrator | 2026-01-10 15:03:06 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0...
2026-01-10 15:03:06.239125 | orchestrator | 2026-01-10 15:03:06 | INFO  | Cluster Status: Primary
2026-01-10 15:03:06.239210 | orchestrator | 2026-01-10 15:03:06 | INFO  | Connected: ON
2026-01-10 15:03:06.239220 | orchestrator | 2026-01-10 15:03:06 | INFO  | Ready: ON
2026-01-10 15:03:06.239226 | orchestrator | 2026-01-10 15:03:06 | INFO  | Cluster Size: 3
2026-01-10 15:03:06.239233 | orchestrator | 2026-01-10 15:03:06 | INFO  | Local State: Synced
2026-01-10 15:03:06.239239 | orchestrator | 2026-01-10 15:03:06 | INFO  | Cluster State UUID: d7ec4459-ee31-11f0-a94d-779a01f37d46
2026-01-10 15:03:06.239257 | orchestrator | 2026-01-10 15:03:06 | INFO  | Cluster Members: 192.168.16.10:3306,192.168.16.11:3306,192.168.16.12:3306
2026-01-10 15:03:06.239295 | orchestrator | 2026-01-10 15:03:06 | INFO  | Galera Version: 26.4.24(ra6b53429)
2026-01-10 15:03:06.239302 | orchestrator | 2026-01-10 15:03:06 | INFO  | Local Node UUID: 0ef82579-ee32-11f0-b69a-57f8bde8b76f
2026-01-10 15:03:06.239537 | orchestrator | 2026-01-10 15:03:06 | INFO  | Flow Control Paused: 0.00%
2026-01-10 15:03:06.239551 | orchestrator | 2026-01-10 15:03:06 | INFO  | Recv Queue Avg: 0
2026-01-10 15:03:06.239555 | orchestrator | 2026-01-10 15:03:06 | INFO  | Send Queue Avg: 0.000626017
2026-01-10 15:03:06.239559 | orchestrator | 2026-01-10 15:03:06 | INFO  | Transactions: 5205 local commits, 7920 replicated, 136 received
2026-01-10 15:03:06.239850 | orchestrator | 2026-01-10 15:03:06 | INFO  | Conflicts: 0 cert failures, 0 bf aborts
2026-01-10 15:03:06.239873 | orchestrator | 2026-01-10 15:03:06
| INFO  | MariaDB Uptime: 24 minutes, 25 seconds 2026-01-10 15:03:06.239880 | orchestrator | 2026-01-10 15:03:06 | INFO  | Threads: 128 connected, 1 running 2026-01-10 15:03:06.239886 | orchestrator | 2026-01-10 15:03:06 | INFO  | Queries: 139784 total, 0 slow 2026-01-10 15:03:06.239893 | orchestrator | 2026-01-10 15:03:06 | INFO  | Aborted Connects: 45 2026-01-10 15:03:06.239900 | orchestrator | 2026-01-10 15:03:06 | INFO  | MariaDB Galera Cluster validation PASSED 2026-01-10 15:03:06.584894 | orchestrator | 2026-01-10 15:03:06.584978 | orchestrator | # Status of Prometheus 2026-01-10 15:03:06.584987 | orchestrator | 2026-01-10 15:03:06.584991 | orchestrator | + echo 2026-01-10 15:03:06.584996 | orchestrator | + echo '# Status of Prometheus' 2026-01-10 15:03:06.585000 | orchestrator | + echo 2026-01-10 15:03:06.585005 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-01-10 15:03:06.644271 | orchestrator | Unauthorized 2026-01-10 15:03:06.649096 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-01-10 15:03:06.711016 | orchestrator | Unauthorized 2026-01-10 15:03:06.714824 | orchestrator | 2026-01-10 15:03:06.714887 | orchestrator | # Status of RabbitMQ 2026-01-10 15:03:06.714892 | orchestrator | 2026-01-10 15:03:06.714897 | orchestrator | + echo 2026-01-10 15:03:06.714901 | orchestrator | + echo '# Status of RabbitMQ' 2026-01-10 15:03:06.714906 | orchestrator | + echo 2026-01-10 15:03:06.715793 | orchestrator | ++ semver latest 10.0.0-0 2026-01-10 15:03:06.785173 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 15:03:06.785246 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 15:03:06.785254 | orchestrator | + osism status messaging 2026-01-10 15:03:29.060004 | orchestrator | 2026-01-10 15:03:29 | ERROR  | Unable to get ansible vault password 2026-01-10 15:03:29.060079 | orchestrator | 2026-01-10 15:03:29 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: 
'/share/ansible_vault_password.key' 2026-01-10 15:03:29.060086 | orchestrator | 2026-01-10 15:03:29 | ERROR  | Dropping encrypted entries 2026-01-10 15:03:29.101987 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-01-10 15:03:29.160982 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-01-10 15:03:29.161129 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-01-10 15:03:29.161140 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-01-10 15:03:29.161145 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Cluster Size: 3 2026-01-10 15:03:29.161399 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:03:29.161910 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:03:29.161943 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-01-10 15:03:29.162265 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Connections: 206, Channels: 205, Queues: 173 2026-01-10 15:03:29.162283 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Messages: 218 total, 218 ready, 0 unacked 2026-01-10 15:03:29.162649 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Message Rates: 8.2/s publish, 8.2/s deliver 2026-01-10 15:03:29.162747 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Disk Free: 58.5 GB (limit: 0.0 GB) 2026-01-10 15:03:29.162949 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-01-10 15:03:29.163470 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] File Descriptors: 
116/1024 2026-01-10 15:03:29.163531 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-0] Sockets: 69/832 2026-01-10 15:03:29.163886 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-01-10 15:03:29.225339 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-01-10 15:03:29.226087 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-01-10 15:03:29.226157 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-01-10 15:03:29.226170 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Cluster Size: 3 2026-01-10 15:03:29.226261 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:03:29.226278 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:03:29.226287 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-01-10 15:03:29.226298 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Connections: 206, Channels: 205, Queues: 173 2026-01-10 15:03:29.226317 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Messages: 218 total, 218 ready, 0 unacked 2026-01-10 15:03:29.226838 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Message Rates: 8.2/s publish, 8.2/s deliver 2026-01-10 15:03:29.226863 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Disk Free: 58.8 GB (limit: 0.0 GB) 2026-01-10 15:03:29.227139 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-01-10 15:03:29.228478 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] File Descriptors: 113/1024 2026-01-10 
15:03:29.228522 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-1] Sockets: 65/832 2026-01-10 15:03:29.228530 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-01-10 15:03:29.290677 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-01-10 15:03:29.293619 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-01-10 15:03:29.293712 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-01-10 15:03:29.293727 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Cluster Size: 3 2026-01-10 15:03:29.293737 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:03:29.293800 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:03:29.293817 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-01-10 15:03:29.293829 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Connections: 206, Channels: 205, Queues: 173 2026-01-10 15:03:29.293840 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Messages: 218 total, 218 ready, 0 unacked 2026-01-10 15:03:29.293852 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Message Rates: 8.2/s publish, 8.2/s deliver 2026-01-10 15:03:29.293864 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Disk Free: 58.9 GB (limit: 0.0 GB) 2026-01-10 15:03:29.293875 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-01-10 15:03:29.293887 | orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] File Descriptors: 118/1024 2026-01-10 15:03:29.293900 | 
orchestrator | 2026-01-10 15:03:29 | INFO  | [testbed-node-2] Sockets: 72/832 2026-01-10 15:03:29.293913 | orchestrator | 2026-01-10 15:03:29 | INFO  | RabbitMQ Cluster validation PASSED 2026-01-10 15:03:29.652347 | orchestrator | 2026-01-10 15:03:29.652435 | orchestrator | # Status of Redis 2026-01-10 15:03:29.652445 | orchestrator | 2026-01-10 15:03:29.652453 | orchestrator | + echo 2026-01-10 15:03:29.652461 | orchestrator | + echo '# Status of Redis' 2026-01-10 15:03:29.652469 | orchestrator | + echo 2026-01-10 15:03:29.652478 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-01-10 15:03:29.658861 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002073s;;;0.000000;10.000000 2026-01-10 15:03:29.659576 | orchestrator | 2026-01-10 15:03:29.659608 | orchestrator | # Create backup of MariaDB database 2026-01-10 15:03:29.659615 | orchestrator | 2026-01-10 15:03:29.659620 | orchestrator | + popd 2026-01-10 15:03:29.659625 | orchestrator | + echo 2026-01-10 15:03:29.659630 | orchestrator | + echo '# Create backup of MariaDB database' 2026-01-10 15:03:29.659635 | orchestrator | + echo 2026-01-10 15:03:29.659641 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-01-10 15:03:31.862594 | orchestrator | 2026-01-10 15:03:31 | INFO  | Task ee95c8b4-7d5b-4a1d-b4c5-cd6f6f5aaee6 (mariadb_backup) was prepared for execution. 2026-01-10 15:03:31.862685 | orchestrator | 2026-01-10 15:03:31 | INFO  | It takes a moment until task ee95c8b4-7d5b-4a1d-b4c5-cd6f6f5aaee6 (mariadb_backup) has been started and output is visible here. 
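The MariaDB and RabbitMQ status steps above both gate on `semver latest 10.0.0-0` returning `-1` and then fall through to an explicit `latest` string match before running `osism status`. A minimal sketch of that version-gating logic, reimplemented with `sort -V` (the `version_ge` helper name is an assumption, not part of the job's scripts):

```shell
#!/usr/bin/env bash
# version_ge VER MIN: succeed if VER >= MIN.
# Mirrors the log's pattern: "latest" is treated as newer than any release.
version_ge() {
  local ver="$1" min="$2"
  [[ "$ver" == "latest" ]] && return 0      # explicit latest fallback, as in the log
  # sort -V puts the smaller version first; VER >= MIN iff MIN sorts first.
  [[ "$(printf '%s\n' "$min" "$ver" | sort -V | head -n1)" == "$min" ]]
}

version_ge latest 10.0.0-0 && echo "use current-version status checks"
version_ge 9.0.0 10.0.0-0  || echo "fall back to legacy check"
```

Running it prints both messages: `latest` passes the gate unconditionally, while `9.0.0` sorts below `10.0.0-0` and takes the fallback branch.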
2026-01-10 15:05:42.586171 | orchestrator | 2026-01-10 15:05:42.586266 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 15:05:42.586273 | orchestrator | 2026-01-10 15:05:42.586278 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 15:05:42.586283 | orchestrator | Saturday 10 January 2026 15:03:36 +0000 (0:00:00.177) 0:00:00.177 ****** 2026-01-10 15:05:42.586287 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:42.586292 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:05:42.586296 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:05:42.586300 | orchestrator | 2026-01-10 15:05:42.586304 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 15:05:42.586308 | orchestrator | Saturday 10 January 2026 15:03:36 +0000 (0:00:00.378) 0:00:00.555 ****** 2026-01-10 15:05:42.586312 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-10 15:05:42.586317 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-10 15:05:42.586320 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-10 15:05:42.586324 | orchestrator | 2026-01-10 15:05:42.586328 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-10 15:05:42.586332 | orchestrator | 2026-01-10 15:05:42.586335 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-10 15:05:42.586356 | orchestrator | Saturday 10 January 2026 15:03:37 +0000 (0:00:00.661) 0:00:01.217 ****** 2026-01-10 15:05:42.586360 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-10 15:05:42.586364 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-10 15:05:42.586368 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-10 15:05:42.586372 | orchestrator | 
2026-01-10 15:05:42.586376 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 15:05:42.586380 | orchestrator | Saturday 10 January 2026 15:03:37 +0000 (0:00:00.422) 0:00:01.640 ****** 2026-01-10 15:05:42.586384 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 15:05:42.586389 | orchestrator | 2026-01-10 15:05:42.586393 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-01-10 15:05:42.586397 | orchestrator | Saturday 10 January 2026 15:03:38 +0000 (0:00:00.559) 0:00:02.199 ****** 2026-01-10 15:05:42.586401 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:05:42.586405 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:05:42.586408 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:05:42.586412 | orchestrator | 2026-01-10 15:05:42.586416 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-01-10 15:05:42.586420 | orchestrator | Saturday 10 January 2026 15:03:41 +0000 (0:00:03.480) 0:00:05.680 ****** 2026-01-10 15:05:42.586423 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-10 15:05:42.586427 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-10 15:05:42.586432 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-10 15:05:42.586436 | orchestrator | mariadb_bootstrap_restart 2026-01-10 15:05:42.586450 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:05:42.586454 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:05:42.586458 | orchestrator | changed: [testbed-node-0] 2026-01-10 15:05:42.586462 | orchestrator | 2026-01-10 15:05:42.586465 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-10 15:05:42.586469 | orchestrator | 
skipping: no hosts matched 2026-01-10 15:05:42.586473 | orchestrator | 2026-01-10 15:05:42.586477 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-10 15:05:42.586480 | orchestrator | skipping: no hosts matched 2026-01-10 15:05:42.586484 | orchestrator | 2026-01-10 15:05:42.586488 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-10 15:05:42.586492 | orchestrator | skipping: no hosts matched 2026-01-10 15:05:42.586495 | orchestrator | 2026-01-10 15:05:42.586499 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-10 15:05:42.586503 | orchestrator | 2026-01-10 15:05:42.586507 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-10 15:05:42.586510 | orchestrator | Saturday 10 January 2026 15:05:41 +0000 (0:01:59.635) 0:02:05.316 ****** 2026-01-10 15:05:42.586514 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:42.586518 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:05:42.586522 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:05:42.586526 | orchestrator | 2026-01-10 15:05:42.586529 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-10 15:05:42.586533 | orchestrator | Saturday 10 January 2026 15:05:41 +0000 (0:00:00.313) 0:02:05.629 ****** 2026-01-10 15:05:42.586537 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:05:42.586541 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:05:42.586544 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:05:42.586548 | orchestrator | 2026-01-10 15:05:42.586552 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 15:05:42.586557 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 
15:05:42.586565 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 15:05:42.586569 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 15:05:42.586573 | orchestrator | 2026-01-10 15:05:42.586577 | orchestrator | 2026-01-10 15:05:42.586581 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 15:05:42.586585 | orchestrator | Saturday 10 January 2026 15:05:42 +0000 (0:00:00.515) 0:02:06.144 ****** 2026-01-10 15:05:42.586588 | orchestrator | =============================================================================== 2026-01-10 15:05:42.586592 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 119.64s 2026-01-10 15:05:42.586606 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.48s 2026-01-10 15:05:42.586610 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s 2026-01-10 15:05:42.586614 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2026-01-10 15:05:42.586618 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.52s 2026-01-10 15:05:42.586622 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s 2026-01-10 15:05:42.586626 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2026-01-10 15:05:42.586630 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2026-01-10 15:05:42.979355 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-01-10 15:05:42.985061 | orchestrator | + set -e 2026-01-10 15:05:42.985139 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-10 15:05:42.985149 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-10 15:05:42.985156 | orchestrator | ++ INTERACTIVE=false 2026-01-10 15:05:42.985161 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 15:05:42.985166 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-10 15:05:42.985171 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-10 15:05:42.986088 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-10 15:05:42.991371 | orchestrator | 2026-01-10 15:05:42.991419 | orchestrator | # OpenStack endpoints 2026-01-10 15:05:42.991427 | orchestrator | 2026-01-10 15:05:42.991434 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-10 15:05:42.991440 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-10 15:05:42.991449 | orchestrator | + export OS_CLOUD=admin 2026-01-10 15:05:42.991458 | orchestrator | + OS_CLOUD=admin 2026-01-10 15:05:42.991468 | orchestrator | + echo 2026-01-10 15:05:42.991481 | orchestrator | + echo '# OpenStack endpoints' 2026-01-10 15:05:42.991489 | orchestrator | + echo 2026-01-10 15:05:42.991497 | orchestrator | + openstack endpoint list 2026-01-10 15:05:46.476371 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-10 15:05:46.476475 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-01-10 15:05:46.476486 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-10 15:05:46.476496 | orchestrator | | 22812bb3468d4308892563792e734c83 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-10 15:05:46.476504 | orchestrator | | 288964bcef79437fa443f6ac0ac8c30b | RegionOne | magnum | 
container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-01-10 15:05:46.476529 | orchestrator | | 350148853bd740f29d0e64449c8a94da | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-01-10 15:05:46.476537 | orchestrator | | 3c76954db34848df98afccf0f9175101 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-10 15:05:46.476565 | orchestrator | | 3c7faecef9b149a1924bafe499dbf81b | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-01-10 15:05:46.476574 | orchestrator | | 5c20bbaceeaa465bab32e81fbbb134d8 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-01-10 15:05:46.476583 | orchestrator | | 69327ccd897a40218ecc743162538d3e | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-01-10 15:05:46.476590 | orchestrator | | 76eebc758c5b497cbf32d47fc3364a6f | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-01-10 15:05:46.476598 | orchestrator | | 78bd409e28c74e45a88abb3e12c6083f | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-01-10 15:05:46.476606 | orchestrator | | 7d0717396a894208a9f8e61ebb47ccbd | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-10 15:05:46.476614 | orchestrator | | 86d87d48e7694b89ba07bc4402ceb7a8 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-10 15:05:46.476622 | orchestrator | | 8b4d5b2a044246459bf53b48577db09a | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-01-10 15:05:46.476630 | orchestrator | | b7d96dc1ca124dadaa9046745b6a860b | RegionOne | neutron | network | True | internal 
| https://api-int.testbed.osism.xyz:9696 | 2026-01-10 15:05:46.476638 | orchestrator | | bc25dbcccbb04977ae2eb84aaafec982 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-01-10 15:05:46.476646 | orchestrator | | bfe3c096d4a640efb540705a147888ee | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-01-10 15:05:46.476654 | orchestrator | | c1d494d564cd463c983d7f970b1376d5 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-01-10 15:05:46.476662 | orchestrator | | c48aa13067b74b1c81b3233c034e0d4b | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-01-10 15:05:46.476670 | orchestrator | | ccf7ce9c1d1a4b66a8490cd4ff402188 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-01-10 15:05:46.476678 | orchestrator | | d560d19a80ae4c3da67a6f5c12441487 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-01-10 15:05:46.476686 | orchestrator | | dc42a209c5e148c19388e91d1cafc928 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-01-10 15:05:46.476713 | orchestrator | | e8968a72b5444644b89eebc460ad8bab | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-01-10 15:05:46.476729 | orchestrator | | f30d6fca0b1b445a8d2aa3e9ba236e40 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-01-10 15:05:46.476780 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-10 15:05:46.795541 | orchestrator | 2026-01-10 15:05:46.795618 | orchestrator | # Cinder 2026-01-10 15:05:46.795625 | orchestrator | 2026-01-10 15:05:46.795647 | orchestrator | + echo 2026-01-10 
15:05:46.795652 | orchestrator | + echo '# Cinder' 2026-01-10 15:05:46.795656 | orchestrator | + echo 2026-01-10 15:05:46.795660 | orchestrator | + openstack volume service list 2026-01-10 15:05:50.502392 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-10 15:05:50.502505 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-01-10 15:05:50.502514 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-10 15:05:50.502521 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-10T15:05:40.000000 | 2026-01-10 15:05:50.502528 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-10T15:05:49.000000 | 2026-01-10 15:05:50.502534 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-10T15:05:49.000000 | 2026-01-10 15:05:50.502540 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-01-10T15:05:49.000000 | 2026-01-10 15:05:50.502546 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-01-10T15:05:43.000000 | 2026-01-10 15:05:50.502552 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-01-10T15:05:44.000000 | 2026-01-10 15:05:50.502558 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-01-10T15:05:47.000000 | 2026-01-10 15:05:50.502564 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-01-10T15:05:48.000000 | 2026-01-10 15:05:50.502570 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-01-10T15:05:49.000000 | 2026-01-10 15:05:50.502576 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-10 15:05:50.812596 | 
orchestrator | 2026-01-10 15:05:50.812714 | orchestrator | # Neutron 2026-01-10 15:05:50.812725 | orchestrator | 2026-01-10 15:05:50.812733 | orchestrator | + echo 2026-01-10 15:05:50.812740 | orchestrator | + echo '# Neutron' 2026-01-10 15:05:50.812881 | orchestrator | + echo 2026-01-10 15:05:50.812894 | orchestrator | + openstack network agent list 2026-01-10 15:05:53.944108 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-10 15:05:53.944204 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-01-10 15:05:53.944211 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-10 15:05:53.944215 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-01-10 15:05:53.944220 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-01-10 15:05:53.944224 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-01-10 15:05:53.944227 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-01-10 15:05:53.944232 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-01-10 15:05:53.944236 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-01-10 15:05:53.944240 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-10 15:05:53.944243 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | 
UP | neutron-ovn-metadata-agent |
2026-01-10 15:05:53.944266 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-01-10 15:05:53.944272 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-01-10 15:05:54.279949 | orchestrator | + openstack network service provider list
2026-01-10 15:05:57.000833 | orchestrator | +---------------+------+---------+
2026-01-10 15:05:57.000919 | orchestrator | | Service Type | Name | Default |
2026-01-10 15:05:57.000926 | orchestrator | +---------------+------+---------+
2026-01-10 15:05:57.000930 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-01-10 15:05:57.000934 | orchestrator | +---------------+------+---------+
2026-01-10 15:05:57.349909 | orchestrator |
2026-01-10 15:05:57.349981 | orchestrator | # Nova
2026-01-10 15:05:57.349989 | orchestrator |
2026-01-10 15:05:57.349995 | orchestrator | + echo
2026-01-10 15:05:57.350001 | orchestrator | + echo '# Nova'
2026-01-10 15:05:57.350006 | orchestrator | + echo
2026-01-10 15:05:57.350048 | orchestrator | + openstack compute service list
2026-01-10 15:06:00.245480 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-01-10 15:06:00.245553 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-01-10 15:06:00.245560 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-01-10 15:06:00.245564 | orchestrator | | e115c382-8912-4953-bceb-f279d02b07c1 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-10T15:05:51.000000 |
2026-01-10 15:06:00.245569 | orchestrator | | dfa25d36-8d8d-4891-a6a1-798b4085e700 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-10T15:05:52.000000 |
2026-01-10 15:06:00.245573 | orchestrator | | afd7e284-2b63-474c-ac2b-26bbccbd8a0d | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-10T15:05:52.000000 |
2026-01-10 15:06:00.245588 | orchestrator | | 74610aed-b420-4c33-b6ad-92c205ae5c97 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-01-10T15:05:57.000000 |
2026-01-10 15:06:00.245593 | orchestrator | | 6df45b73-7b8b-4c38-b17f-d6f54ef65c7c | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-01-10T15:05:57.000000 |
2026-01-10 15:06:00.245597 | orchestrator | | b0d743a2-0b04-4e58-9cfe-d61e24919f97 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-01-10T15:05:57.000000 |
2026-01-10 15:06:00.245601 | orchestrator | | b49a21c0-51db-41b5-a288-588a30be9811 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-01-10T15:05:56.000000 |
2026-01-10 15:06:00.245604 | orchestrator | | 6348144c-7ada-4c52-9a60-39bf0c2d8140 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-01-10T15:05:56.000000 |
2026-01-10 15:06:00.245608 | orchestrator | | 3682d0ec-b4f5-4d67-8195-faa3890e86c1 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-01-10T15:05:56.000000 |
2026-01-10 15:06:00.245612 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-01-10 15:06:00.644219 | orchestrator | + openstack hypervisor list
2026-01-10 15:06:03.465483 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-10 15:06:03.465567 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-01-10 15:06:03.465578 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-10 15:06:03.465584 | orchestrator | | bdcbb3ed-8598-463d-a818-9d68b5c80ffc | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-01-10 15:06:03.465591 | orchestrator | | 50a918c0-2c88-43cd-8e0d-110ebaa066bf | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-01-10 15:06:03.465598 | orchestrator | | 76a08603-e611-4219-83b2-c14fab5e7e19 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-01-10 15:06:03.465631 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-10 15:06:03.916591 | orchestrator |
2026-01-10 15:06:03.916702 | orchestrator | # Run OpenStack test play
2026-01-10 15:06:03.916721 | orchestrator |
2026-01-10 15:06:03.916735 | orchestrator | + echo
2026-01-10 15:06:03.916750 | orchestrator | + echo '# Run OpenStack test play'
2026-01-10 15:06:03.916765 | orchestrator | + echo
2026-01-10 15:06:03.916877 | orchestrator | + osism apply --environment openstack test
2026-01-10 15:06:06.047321 | orchestrator | 2026-01-10 15:06:06 | INFO  | Trying to run play test in environment openstack
2026-01-10 15:06:16.153190 | orchestrator | 2026-01-10 15:06:16 | INFO  | Task 5e6a4f23-40c2-40a4-9c11-b85b91436471 (test) was prepared for execution.
2026-01-10 15:06:16.153292 | orchestrator | 2026-01-10 15:06:16 | INFO  | It takes a moment until task 5e6a4f23-40c2-40a4-9c11-b85b91436471 (test) has been started and output is visible here.
2026-01-10 15:13:36.197261 | orchestrator |
2026-01-10 15:13:36.197342 | orchestrator | PLAY [Create test project] *****************************************************
2026-01-10 15:13:36.197354 | orchestrator |
2026-01-10 15:13:36.197359 | orchestrator | TASK [Create test domain] ******************************************************
2026-01-10 15:13:36.197364 | orchestrator | Saturday 10 January 2026 15:06:20 +0000 (0:00:00.078) 0:00:00.078 ******
2026-01-10 15:13:36.197368 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197373 | orchestrator |
2026-01-10 15:13:36.197377 | orchestrator | TASK [Create test-admin user] **************************************************
2026-01-10 15:13:36.197381 | orchestrator | Saturday 10 January 2026 15:06:24 +0000 (0:00:03.680) 0:00:03.759 ******
2026-01-10 15:13:36.197385 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197389 | orchestrator |
2026-01-10 15:13:36.197393 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-01-10 15:13:36.197398 | orchestrator | Saturday 10 January 2026 15:06:28 +0000 (0:00:04.183) 0:00:07.943 ******
2026-01-10 15:13:36.197402 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197405 | orchestrator |
2026-01-10 15:13:36.197409 | orchestrator | TASK [Create test project] *****************************************************
2026-01-10 15:13:36.197413 | orchestrator | Saturday 10 January 2026 15:06:35 +0000 (0:00:06.699) 0:00:14.642 ******
2026-01-10 15:13:36.197419 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197425 | orchestrator |
2026-01-10 15:13:36.197441 | orchestrator | TASK [Create test user] ********************************************************
2026-01-10 15:13:36.197449 | orchestrator | Saturday 10 January 2026 15:06:39 +0000 (0:00:04.196) 0:00:18.839 ******
2026-01-10 15:13:36.197455 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197461 | orchestrator |
2026-01-10 15:13:36.197467 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-01-10 15:13:36.197474 | orchestrator | Saturday 10 January 2026 15:06:43 +0000 (0:00:04.278) 0:00:23.117 ******
2026-01-10 15:13:36.197480 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-01-10 15:13:36.197487 | orchestrator | changed: [localhost] => (item=member)
2026-01-10 15:13:36.197494 | orchestrator | changed: [localhost] => (item=creator)
2026-01-10 15:13:36.197501 | orchestrator |
2026-01-10 15:13:36.197506 | orchestrator | TASK [Create test server group] ************************************************
2026-01-10 15:13:36.197518 | orchestrator | Saturday 10 January 2026 15:06:55 +0000 (0:00:11.576) 0:00:34.694 ******
2026-01-10 15:13:36.197523 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197526 | orchestrator |
2026-01-10 15:13:36.197530 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-01-10 15:13:36.197534 | orchestrator | Saturday 10 January 2026 15:06:59 +0000 (0:00:04.222) 0:00:38.916 ******
2026-01-10 15:13:36.197538 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197542 | orchestrator |
2026-01-10 15:13:36.197546 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-01-10 15:13:36.197563 | orchestrator | Saturday 10 January 2026 15:07:04 +0000 (0:00:05.003) 0:00:43.920 ******
2026-01-10 15:13:36.197567 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197586 | orchestrator |
2026-01-10 15:13:36.197590 | orchestrator | TASK [Create icmp security group] **********************************************
2026-01-10 15:13:36.197594 | orchestrator | Saturday 10 January 2026 15:07:08 +0000 (0:00:04.158) 0:00:48.079 ******
2026-01-10 15:13:36.197597 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197601 | orchestrator |
2026-01-10 15:13:36.197605 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-01-10 15:13:36.197608 | orchestrator | Saturday 10 January 2026 15:07:12 +0000 (0:00:03.856) 0:00:51.936 ******
2026-01-10 15:13:36.197612 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197616 | orchestrator |
2026-01-10 15:13:36.197620 | orchestrator | TASK [Create test keypair] *****************************************************
2026-01-10 15:13:36.197623 | orchestrator | Saturday 10 January 2026 15:07:16 +0000 (0:00:04.124) 0:00:56.060 ******
2026-01-10 15:13:36.197627 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197631 | orchestrator |
2026-01-10 15:13:36.197635 | orchestrator | TASK [Create test network] *****************************************************
2026-01-10 15:13:36.197641 | orchestrator | Saturday 10 January 2026 15:07:20 +0000 (0:00:03.954) 0:01:00.014 ******
2026-01-10 15:13:36.197647 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197653 | orchestrator |
2026-01-10 15:13:36.197660 | orchestrator | TASK [Create test subnet] ******************************************************
2026-01-10 15:13:36.197666 | orchestrator | Saturday 10 January 2026 15:07:25 +0000 (0:00:04.895) 0:01:04.909 ******
2026-01-10 15:13:36.197672 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197678 | orchestrator |
2026-01-10 15:13:36.197683 | orchestrator | TASK [Create test router] ******************************************************
2026-01-10 15:13:36.197688 | orchestrator | Saturday 10 January 2026 15:07:31 +0000 (0:00:05.468) 0:01:10.378 ******
2026-01-10 15:13:36.197695 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197701 | orchestrator |
2026-01-10 15:13:36.197708 | orchestrator | TASK [Create test instances] ***************************************************
2026-01-10 15:13:36.197715 | orchestrator | Saturday 10 January 2026 15:07:42 +0000 (0:00:11.329) 0:01:21.708 ******
2026-01-10 15:13:36.197722 | orchestrator | changed: [localhost] => (item=test)
2026-01-10 15:13:36.197727 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-10 15:13:36.197730 | orchestrator |
2026-01-10 15:13:36.197734 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-10 15:13:36.197739 | orchestrator |
2026-01-10 15:13:36.197743 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-10 15:13:36.197747 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-10 15:13:36.197751 | orchestrator |
2026-01-10 15:13:36.197756 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-10 15:13:36.197760 | orchestrator |
2026-01-10 15:13:36.197764 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-10 15:13:36.197768 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-10 15:13:36.197773 | orchestrator |
2026-01-10 15:13:36.197777 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-10 15:13:36.197781 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-10 15:13:36.197785 | orchestrator |
2026-01-10 15:13:36.197789 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-01-10 15:13:36.197806 | orchestrator | Saturday 10 January 2026 15:12:10 +0000 (0:04:28.251) 0:05:49.960 ******
2026-01-10 15:13:36.197810 | orchestrator | changed: [localhost] => (item=test)
2026-01-10 15:13:36.197815 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-10 15:13:36.197819 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-10 15:13:36.197823 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-10 15:13:36.197827 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-10 15:13:36.197832 | orchestrator |
2026-01-10 15:13:36.197836 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-01-10 15:13:36.197840 | orchestrator | Saturday 10 January 2026 15:12:34 +0000 (0:00:23.638) 0:06:13.598 ******
2026-01-10 15:13:36.197849 | orchestrator | changed: [localhost] => (item=test)
2026-01-10 15:13:36.197861 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-10 15:13:36.197867 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-10 15:13:36.197874 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-10 15:13:36.197880 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-10 15:13:36.197886 | orchestrator |
2026-01-10 15:13:36.197892 | orchestrator | TASK [Create test volume] ******************************************************
2026-01-10 15:13:36.197898 | orchestrator | Saturday 10 January 2026 15:13:10 +0000 (0:00:35.855) 0:06:49.454 ******
2026-01-10 15:13:36.197904 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197910 | orchestrator |
2026-01-10 15:13:36.197916 | orchestrator | TASK [Attach test volume] ******************************************************
2026-01-10 15:13:36.197921 | orchestrator | Saturday 10 January 2026 15:13:16 +0000 (0:00:06.531) 0:06:55.985 ******
2026-01-10 15:13:36.197927 | orchestrator | changed: [localhost]
2026-01-10 15:13:36.197933 | orchestrator |
2026-01-10 15:13:36.197938 | orchestrator | TASK [Create floating ip address] **********************************************
2026-01-10 15:13:36.197945 | orchestrator | Saturday 10 January 2026 15:13:30 +0000 (0:00:13.668) 0:07:09.654 ******
2026-01-10 15:13:36.197951 | orchestrator | ok: [localhost]
2026-01-10 15:13:36.197958 | orchestrator |
2026-01-10 15:13:36.197965 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-01-10 15:13:36.197971 | orchestrator | Saturday 10 January 2026 15:13:35 +0000 (0:00:05.541) 0:07:15.195 ******
2026-01-10 15:13:36.197977 | orchestrator | ok: [localhost] => {
2026-01-10 15:13:36.197984 | orchestrator |  "msg": "192.168.112.147"
2026-01-10 15:13:36.197989 | orchestrator | }
2026-01-10 15:13:36.197996 | orchestrator |
2026-01-10 15:13:36.198004 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:13:36.198062 | orchestrator | localhost : ok=22  changed=20  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 15:13:36.198071 | orchestrator |
2026-01-10 15:13:36.198075 | orchestrator |
2026-01-10 15:13:36.198080 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:13:36.198084 | orchestrator | Saturday 10 January 2026 15:13:35 +0000 (0:00:00.042) 0:07:15.237 ******
2026-01-10 15:13:36.198110 | orchestrator | ===============================================================================
2026-01-10 15:13:36.198117 | orchestrator | Create test instances ------------------------------------------------- 268.25s
2026-01-10 15:13:36.198122 | orchestrator | Add tag to instances --------------------------------------------------- 35.86s
2026-01-10 15:13:36.198127 | orchestrator | Add metadata to instances ---------------------------------------------- 23.64s
2026-01-10 15:13:36.198131 | orchestrator | Attach test volume ----------------------------------------------------- 13.67s
2026-01-10 15:13:36.198136 | orchestrator | Add member roles to user test ------------------------------------------ 11.58s
2026-01-10 15:13:36.198140 | orchestrator | Create test router ----------------------------------------------------- 11.33s
2026-01-10 15:13:36.198144 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.70s
2026-01-10 15:13:36.198148 | orchestrator | Create test volume ------------------------------------------------------ 6.53s
2026-01-10 15:13:36.198153 | orchestrator | Create floating ip address ---------------------------------------------- 5.54s
2026-01-10 15:13:36.198157 | orchestrator | Create test subnet ------------------------------------------------------ 5.47s
2026-01-10 15:13:36.198161 | orchestrator | Create ssh security group ----------------------------------------------- 5.00s
2026-01-10 15:13:36.198166 | orchestrator | Create test network ----------------------------------------------------- 4.90s
2026-01-10 15:13:36.198170 | orchestrator | Create test user -------------------------------------------------------- 4.28s
2026-01-10 15:13:36.198175 | orchestrator | Create test server group ------------------------------------------------ 4.22s
2026-01-10 15:13:36.198179 | orchestrator | Create test project ----------------------------------------------------- 4.20s
2026-01-10 15:13:36.198188 | orchestrator | Create test-admin user -------------------------------------------------- 4.18s
2026-01-10 15:13:36.198192 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.16s
2026-01-10 15:13:36.198196 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.12s
2026-01-10 15:13:36.198200 | orchestrator | Create test keypair ----------------------------------------------------- 3.95s
2026-01-10 15:13:36.198203 | orchestrator | Create icmp security group ---------------------------------------------- 3.86s
2026-01-10 15:13:36.649502 | orchestrator | + server_list
2026-01-10 15:13:36.649561 | orchestrator | + openstack --os-cloud test server list
2026-01-10 15:13:40.611547 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-10 15:13:40.611634 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-01-10 15:13:40.611642 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-10 15:13:40.611647 | orchestrator | |
13541841-1852-4b59-8b0b-7c2badb02c22 | test-4 | ACTIVE | test=192.168.112.169, 192.168.200.90 | N/A (booted from volume) | SCS-1L-1 |
2026-01-10 15:13:40.611651 | orchestrator | | 9b61d686-360e-45c1-adc4-2f20f560bb6f | test-3 | ACTIVE | test=192.168.112.170, 192.168.200.206 | N/A (booted from volume) | SCS-1L-1 |
2026-01-10 15:13:40.611656 | orchestrator | | e298516c-1be8-4521-8f5f-6bc75b56ab5a | test-2 | ACTIVE | test=192.168.112.156, 192.168.200.12 | N/A (booted from volume) | SCS-1L-1 |
2026-01-10 15:13:40.611659 | orchestrator | | aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e | test-1 | ACTIVE | test=192.168.112.171, 192.168.200.224 | N/A (booted from volume) | SCS-1L-1 |
2026-01-10 15:13:40.611663 | orchestrator | | 03fe9a91-ef93-4ec8-8e85-710d07cf142e | test | ACTIVE | test=192.168.112.147, 192.168.200.144 | N/A (booted from volume) | SCS-1L-1 |
2026-01-10 15:13:40.611667 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-10 15:13:40.978540 | orchestrator | + openstack --os-cloud test server show test
2026-01-10 15:13:44.391855 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:44.391954 | orchestrator | | Field | Value |
2026-01-10 15:13:44.391965 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:44.391973 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-10 15:13:44.391980 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-10 15:13:44.392003 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-10 15:13:44.392009 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-01-10 15:13:44.392016 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-10 15:13:44.392022 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-10 15:13:44.392043 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-10 15:13:44.392050 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-10 15:13:44.392062 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-10 15:13:44.392071 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-10 15:13:44.392077 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-10 15:13:44.392089 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-10 15:13:44.392179 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-10 15:13:44.392190 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-10 15:13:44.392196 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-10 15:13:44.392202 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:08:31.000000 |
2026-01-10 15:13:44.392216 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-10 15:13:44.392222 | orchestrator | | accessIPv4 | |
2026-01-10 15:13:44.392229 | orchestrator | | accessIPv6 | |
2026-01-10 15:13:44.392242 | orchestrator | | addresses | test=192.168.112.147, 192.168.200.144 |
2026-01-10 15:13:44.392255 | orchestrator | | config_drive | |
2026-01-10 15:13:44.392261 | orchestrator | | created | 2026-01-10T15:07:50Z |
2026-01-10 15:13:44.392267 | orchestrator | | description | None |
2026-01-10 15:13:44.392273 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-10 15:13:44.392280 | orchestrator | | hostId | 38098cc4df1d216b9b57798c56cb6ca6e453b6818f9fc9089d8a8641 |
2026-01-10 15:13:44.392285 | orchestrator | | host_status | None |
2026-01-10 15:13:44.392297 | orchestrator | | id | 03fe9a91-ef93-4ec8-8e85-710d07cf142e |
2026-01-10 15:13:44.392304 | orchestrator | | image | N/A (booted from volume) |
2026-01-10 15:13:44.392309 | orchestrator | | key_name | test |
2026-01-10 15:13:44.392324 | orchestrator | | locked | False |
2026-01-10 15:13:44.392331 | orchestrator | | locked_reason | None |
2026-01-10 15:13:44.392336 | orchestrator | | name | test |
2026-01-10 15:13:44.392342 | orchestrator | | pinned_availability_zone | None |
2026-01-10 15:13:44.392348 | orchestrator | | progress | 0 |
2026-01-10 15:13:44.392355 | orchestrator | | project_id | 8db09e9a484d4ab296a9ad3bca699551 |
2026-01-10 15:13:44.392360 | orchestrator | | properties | hostname='test' |
2026-01-10 15:13:44.392371 | orchestrator | | security_groups | name='icmp' |
2026-01-10 15:13:44.392377 | orchestrator | | | name='ssh' |
2026-01-10 15:13:44.392381 | orchestrator | | server_groups | None |
2026-01-10 15:13:44.392392 | orchestrator | | status | ACTIVE |
2026-01-10 15:13:44.392396 | orchestrator | | tags | test |
2026-01-10 15:13:44.392401 | orchestrator | | trusted_image_certificates | None |
2026-01-10 15:13:44.392405 | orchestrator | | updated | 2026-01-10T15:12:15Z |
2026-01-10 15:13:44.392410 | orchestrator | | user_id | ea7bc56375514802a34b840f2dbb354e |
2026-01-10 15:13:44.392415 | orchestrator | | volumes_attached | delete_on_termination='True', id='e0f90f07-aba3-4aa2-bf43-1941c0fff6ac' |
2026-01-10 15:13:44.392422 | orchestrator | | | delete_on_termination='False', id='39300e7b-18a7-4869-94f6-1ee91229ced9' |
2026-01-10 15:13:44.397162 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:44.744285 | orchestrator | + openstack --os-cloud test server show test-1
2026-01-10 15:13:48.092829 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:48.092919 | orchestrator | | Field | Value |
2026-01-10 15:13:48.092938 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:48.092943 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-10 15:13:48.092948 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-10 15:13:48.092953 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-10 15:13:48.092957 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-01-10 15:13:48.092962 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-10 15:13:48.092967 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-10 15:13:48.092983 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-10 15:13:48.092992 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-10 15:13:48.092996 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-10 15:13:48.093000 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-10 15:13:48.093004 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-10 15:13:48.093013 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-10 15:13:48.093017 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-10 15:13:48.093021 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-10 15:13:48.093025 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-10 15:13:48.093029 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:09:28.000000 |
2026-01-10 15:13:48.093042 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-10 15:13:48.093047 | orchestrator | | accessIPv4 | |
2026-01-10 15:13:48.093053 | orchestrator | | accessIPv6 | |
2026-01-10 15:13:48.093057 | orchestrator | | addresses | test=192.168.112.171, 192.168.200.224 |
2026-01-10 15:13:48.093061 | orchestrator | | config_drive | |
2026-01-10 15:13:48.093065 | orchestrator | | created | 2026-01-10T15:08:50Z |
2026-01-10 15:13:48.093069 | orchestrator | | description | None |
2026-01-10 15:13:48.093073 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-10 15:13:48.093078 | orchestrator | | hostId | 0c2523448cac06d1bd568598a021d605c94b43ab609ca5b3a86d1c47 |
2026-01-10 15:13:48.093085 | orchestrator | | host_status | None |
2026-01-10 15:13:48.093093 | orchestrator | | id | aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e |
2026-01-10 15:13:48.093136 | orchestrator | | image | N/A (booted from volume) |
2026-01-10 15:13:48.093143 | orchestrator | | key_name | test |
2026-01-10 15:13:48.093147 | orchestrator | | locked | False |
2026-01-10 15:13:48.093151 | orchestrator | | locked_reason | None |
2026-01-10 15:13:48.093155 | orchestrator | | name | test-1 |
2026-01-10 15:13:48.093159 | orchestrator | | pinned_availability_zone | None |
2026-01-10 15:13:48.093163 | orchestrator | | progress | 0 |
2026-01-10 15:13:48.093167 | orchestrator | | project_id | 8db09e9a484d4ab296a9ad3bca699551 |
2026-01-10 15:13:48.093174 | orchestrator | | properties | hostname='test-1' |
2026-01-10 15:13:48.093182 | orchestrator | | security_groups | name='icmp' |
2026-01-10 15:13:48.093203 | orchestrator | | | name='ssh' |
2026-01-10 15:13:48.093210 | orchestrator | | server_groups | None |
2026-01-10 15:13:48.093214 | orchestrator | | status | ACTIVE |
2026-01-10 15:13:48.093218 | orchestrator | | tags | test |
2026-01-10 15:13:48.093229 | orchestrator | | trusted_image_certificates | None |
2026-01-10 15:13:48.093233 | orchestrator | | updated | 2026-01-10T15:12:20Z |
2026-01-10 15:13:48.093237 | orchestrator | | user_id | ea7bc56375514802a34b840f2dbb354e |
2026-01-10 15:13:48.093250 | orchestrator | | volumes_attached | delete_on_termination='True', id='9c3aa0ab-37cd-4998-b2f4-0ac21eb11ed9' |
2026-01-10 15:13:48.098309 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:48.408212 | orchestrator | + openstack --os-cloud test server show test-2
2026-01-10 15:13:51.508024 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:51.508200 | orchestrator | | Field | Value |
2026-01-10 15:13:51.508224 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:51.508233 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-10 15:13:51.508287 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-10 15:13:51.508296 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-10 15:13:51.508310 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-01-10 15:13:51.508335 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-10 15:13:51.508342 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-10 15:13:51.508366 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-10 15:13:51.508374 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-10 15:13:51.508381 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-10 15:13:51.508391 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-10 15:13:51.508397 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-10 15:13:51.508404 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-10 15:13:51.508411 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-10 15:13:51.508422 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-10 15:13:51.508429 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-10 15:13:51.508435 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:10:23.000000 |
2026-01-10 15:13:51.508447 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-10 15:13:51.508454 | orchestrator | | accessIPv4 | |
2026-01-10 15:13:51.508460 | orchestrator | | accessIPv6 | |
2026-01-10 15:13:51.508467 | orchestrator | | addresses | test=192.168.112.156, 192.168.200.12 |
2026-01-10 15:13:51.508474 | orchestrator | | config_drive | |
2026-01-10 15:13:51.508487 | orchestrator | | created | 2026-01-10T15:09:47Z |
2026-01-10 15:13:51.508498 | orchestrator | | description | None |
2026-01-10 15:13:51.508505 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-10 15:13:51.508512 | orchestrator | | hostId | 1701ebc3110fe92bacf5512a34b5fe2861e80bda8311a3f3e9cf6903 |
2026-01-10 15:13:51.508520 | orchestrator | | host_status | None |
2026-01-10 15:13:51.508532 | orchestrator | | id | e298516c-1be8-4521-8f5f-6bc75b56ab5a |
2026-01-10 15:13:51.508539 | orchestrator | | image | N/A (booted from volume) |
2026-01-10 15:13:51.508545 | orchestrator | | key_name | test |
2026-01-10 15:13:51.508554 | orchestrator | | locked | False |
2026-01-10 15:13:51.508560 | orchestrator | | locked_reason | None |
2026-01-10 15:13:51.508566 | orchestrator | | name | test-2 |
2026-01-10 15:13:51.508576 | orchestrator | | pinned_availability_zone | None |
2026-01-10 15:13:51.508583 | orchestrator | | progress | 0 |
2026-01-10 15:13:51.508589 | orchestrator | | project_id | 8db09e9a484d4ab296a9ad3bca699551 |
2026-01-10 15:13:51.508596 | orchestrator | | properties | hostname='test-2' |
2026-01-10 15:13:51.508607 | orchestrator | | security_groups | name='icmp' |
2026-01-10 15:13:51.508614 | orchestrator | | | name='ssh' |
2026-01-10 15:13:51.508625 | orchestrator | | server_groups | None |
2026-01-10 15:13:51.508633 | orchestrator | | status | ACTIVE |
2026-01-10 15:13:51.508641 | orchestrator | | tags | test |
2026-01-10 15:13:51.508653 | orchestrator | | trusted_image_certificates | None |
2026-01-10 15:13:51.508659 | orchestrator | | updated | 2026-01-10T15:12:24Z |
2026-01-10 15:13:51.508666 | orchestrator | | user_id | ea7bc56375514802a34b840f2dbb354e |
2026-01-10 15:13:51.508674 | orchestrator | | volumes_attached | delete_on_termination='True', id='fdb7a305-8aff-4485-941d-4403e92f569f' |
2026-01-10 15:13:51.516337 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:51.834442 | orchestrator | + openstack --os-cloud test server show test-3
2026-01-10 15:13:54.932718 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:54.932807 | orchestrator | | Field | Value |
2026-01-10 15:13:54.932839 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2026-01-10 15:13:54.932852 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-10 15:13:54.932884 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-10 15:13:54.932896 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-10 15:13:54.932907 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-01-10 15:13:54.932918 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-10 15:13:54.932929 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-10 15:13:54.932966 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-10 15:13:54.932988 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-10 15:13:54.933001 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-10 15:13:54.933017 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-10 15:13:54.933038 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-10 15:13:54.933049 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-10 15:13:54.933061 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-10 15:13:54.933072 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-10 15:13:54.933083 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-10 15:13:54.933094 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:11:11.000000 |
2026-01-10 15:13:54.933155 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-10 15:13:54.933175 | orchestrator | | accessIPv4 | |
2026-01-10 15:13:54.933194 | orchestrator | | accessIPv6 | |
2026-01-10 15:13:54.933220 | orchestrator | | addresses | test=192.168.112.170, 192.168.200.206 |
2026-01-10 15:13:54.933251 | orchestrator | | config_drive | |
2026-01-10 15:13:54.933273 | orchestrator | | created | 2026-01-10T15:10:42Z |
2026-01-10 15:13:54.933294 | orchestrator | | description | None |
2026-01-10 15:13:54.933315 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-10 15:13:54.933330 | orchestrator | | hostId | 38098cc4df1d216b9b57798c56cb6ca6e453b6818f9fc9089d8a8641 |
2026-01-10 15:13:54.933343 | orchestrator | | host_status | None |
2026-01-10 15:13:54.933364 | orchestrator | | id | 9b61d686-360e-45c1-adc4-2f20f560bb6f |
2026-01-10 15:13:54.933377 | orchestrator | | image | N/A (booted from volume) |
2026-01-10 15:13:54.933391 | orchestrator | | key_name | test |
2026-01-10 15:13:54.933424 | orchestrator | | locked | False |
2026-01-10 15:13:54.933445 | orchestrator | | locked_reason | None |
2026-01-10 15:13:54.933466 | orchestrator | | name | test-3 |
2026-01-10 15:13:54.933486 | orchestrator | | pinned_availability_zone | None |
2026-01-10 15:13:54.933507 | orchestrator | | progress | 0 |
2026-01-10
15:13:54.933528 | orchestrator | | project_id | 8db09e9a484d4ab296a9ad3bca699551 | 2026-01-10 15:13:54.933549 | orchestrator | | properties | hostname='test-3' | 2026-01-10 15:13:54.933575 | orchestrator | | security_groups | name='icmp' | 2026-01-10 15:13:54.933587 | orchestrator | | | name='ssh' | 2026-01-10 15:13:54.933605 | orchestrator | | server_groups | None | 2026-01-10 15:13:54.933976 | orchestrator | | status | ACTIVE | 2026-01-10 15:13:54.933991 | orchestrator | | tags | test | 2026-01-10 15:13:54.934002 | orchestrator | | trusted_image_certificates | None | 2026-01-10 15:13:54.934079 | orchestrator | | updated | 2026-01-10T15:12:29Z | 2026-01-10 15:13:54.934094 | orchestrator | | user_id | ea7bc56375514802a34b840f2dbb354e | 2026-01-10 15:13:54.934135 | orchestrator | | volumes_attached | delete_on_termination='True', id='b9e25919-7964-4bd6-a04d-d620e51ded6f' | 2026-01-10 15:13:54.938312 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:13:55.297059 | orchestrator | + openstack --os-cloud test server show test-4 2026-01-10 15:13:58.394738 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:13:58.394857 | orchestrator | | Field | Value | 2026-01-10 15:13:58.394870 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:13:58.394878 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-10 15:13:58.394885 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-10 15:13:58.394893 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-10 15:13:58.394900 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-01-10 15:13:58.394906 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-10 15:13:58.394913 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-10 15:13:58.394929 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-10 15:13:58.394942 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-10 15:13:58.394946 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-10 15:13:58.394950 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-10 15:13:58.394955 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-10 15:13:58.394959 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-10 15:13:58.394963 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-10 15:13:58.394967 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-10 15:13:58.394971 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-10 15:13:58.394974 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:11:57.000000 | 2026-01-10 15:13:58.394982 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-10 15:13:58.394993 | orchestrator | | accessIPv4 | | 2026-01-10 15:13:58.394997 | orchestrator | | accessIPv6 | | 2026-01-10 15:13:58.395001 | orchestrator | | 
addresses | test=192.168.112.169, 192.168.200.90 | 2026-01-10 15:13:58.395005 | orchestrator | | config_drive | | 2026-01-10 15:13:58.395009 | orchestrator | | created | 2026-01-10T15:11:32Z | 2026-01-10 15:13:58.395013 | orchestrator | | description | None | 2026-01-10 15:13:58.395017 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-10 15:13:58.395021 | orchestrator | | hostId | 1701ebc3110fe92bacf5512a34b5fe2861e80bda8311a3f3e9cf6903 | 2026-01-10 15:13:58.395025 | orchestrator | | host_status | None | 2026-01-10 15:13:58.395036 | orchestrator | | id | 13541841-1852-4b59-8b0b-7c2badb02c22 | 2026-01-10 15:13:58.395043 | orchestrator | | image | N/A (booted from volume) | 2026-01-10 15:13:58.395047 | orchestrator | | key_name | test | 2026-01-10 15:13:58.395051 | orchestrator | | locked | False | 2026-01-10 15:13:58.395055 | orchestrator | | locked_reason | None | 2026-01-10 15:13:58.395058 | orchestrator | | name | test-4 | 2026-01-10 15:13:58.395062 | orchestrator | | pinned_availability_zone | None | 2026-01-10 15:13:58.395066 | orchestrator | | progress | 0 | 2026-01-10 15:13:58.395070 | orchestrator | | project_id | 8db09e9a484d4ab296a9ad3bca699551 | 2026-01-10 15:13:58.395077 | orchestrator | | properties | hostname='test-4' | 2026-01-10 15:13:58.395087 | orchestrator | | security_groups | name='icmp' | 2026-01-10 15:13:58.395091 | orchestrator | | | name='ssh' | 2026-01-10 15:13:58.395095 | orchestrator | | server_groups | None | 2026-01-10 15:13:58.395124 | orchestrator | | status | ACTIVE | 2026-01-10 15:13:58.395129 | orchestrator | | tags | test | 2026-01-10 15:13:58.395136 | orchestrator | | 
trusted_image_certificates | None | 2026-01-10 15:13:58.395142 | orchestrator | | updated | 2026-01-10T15:12:33Z | 2026-01-10 15:13:58.395148 | orchestrator | | user_id | ea7bc56375514802a34b840f2dbb354e | 2026-01-10 15:13:58.395158 | orchestrator | | volumes_attached | delete_on_termination='True', id='b0ae236a-dcb7-4d2c-8437-f7e06a0c633d' | 2026-01-10 15:13:58.401006 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:13:58.750548 | orchestrator | + server_ping 2026-01-10 15:13:58.751660 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-10 15:13:58.751726 | orchestrator | ++ tr -d '\r' 2026-01-10 15:14:02.184720 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:14:02.184814 | orchestrator | + ping -c3 192.168.112.156 2026-01-10 15:14:02.204858 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data. 
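The xtrace lines above (`+ server_ping`, the `for address in …` loop, `+ ping -c3 …`) trace the body of a `server_ping` helper. It can be reconstructed as a standalone function like this (a sketch based on the trace; the cloud name `test` and the loop body are taken verbatim from the log, while splitting the listing step into a separate `list_floating_ips` function is an assumption made here so the pieces can be tested in isolation):

```shell
#!/usr/bin/env bash

# List one ACTIVE floating IP per line. The tr call strips stray carriage
# returns, exactly as in the traced pipeline. (Hypothetical helper name.)
list_floating_ips() {
    openstack --os-cloud test floating ip list --status ACTIVE \
        -f value -c "Floating IP Address" | tr -d '\r'
}

# Send three ICMP probes to every ACTIVE floating IP, as the trace shows.
server_ping() {
    for address in $(list_floating_ips); do
        ping -c3 "$address"
    done
}
```

Because the listing step is its own function, it can be stubbed out when exercising the loop without an OpenStack endpoint.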
2026-01-10 15:14:02.204937 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=5.31 ms 2026-01-10 15:14:03.203745 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.41 ms 2026-01-10 15:14:04.204567 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.60 ms 2026-01-10 15:14:04.204653 | orchestrator | 2026-01-10 15:14:04.204660 | orchestrator | --- 192.168.112.156 ping statistics --- 2026-01-10 15:14:04.204666 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-10 15:14:04.204671 | orchestrator | rtt min/avg/max/mdev = 1.601/3.105/5.306/1.590 ms 2026-01-10 15:14:04.205402 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:14:04.205439 | orchestrator | + ping -c3 192.168.112.147 2026-01-10 15:14:04.218953 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data. 2026-01-10 15:14:04.219026 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=8.29 ms 2026-01-10 15:14:05.214174 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=1.90 ms 2026-01-10 15:14:06.214869 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=1.90 ms 2026-01-10 15:14:06.214944 | orchestrator | 2026-01-10 15:14:06.214951 | orchestrator | --- 192.168.112.147 ping statistics --- 2026-01-10 15:14:06.214958 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-01-10 15:14:06.214963 | orchestrator | rtt min/avg/max/mdev = 1.902/4.033/8.293/3.012 ms 2026-01-10 15:14:06.215773 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:14:06.215820 | orchestrator | + ping -c3 192.168.112.170 2026-01-10 15:14:06.227753 | orchestrator | PING 192.168.112.170 (192.168.112.170) 56(84) bytes of data. 
2026-01-10 15:14:06.227843 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=1 ttl=63 time=7.30 ms 2026-01-10 15:14:07.224631 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=2 ttl=63 time=2.63 ms 2026-01-10 15:14:08.225881 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=3 ttl=63 time=2.03 ms 2026-01-10 15:14:08.225975 | orchestrator | 2026-01-10 15:14:08.225985 | orchestrator | --- 192.168.112.170 ping statistics --- 2026-01-10 15:14:08.225992 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-10 15:14:08.225996 | orchestrator | rtt min/avg/max/mdev = 2.030/3.984/7.298/2.355 ms 2026-01-10 15:14:08.226437 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:14:08.226495 | orchestrator | + ping -c3 192.168.112.169 2026-01-10 15:14:08.238137 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data. 2026-01-10 15:14:08.238231 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=7.22 ms 2026-01-10 15:14:09.234583 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.39 ms 2026-01-10 15:14:10.236740 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.68 ms 2026-01-10 15:14:10.236835 | orchestrator | 2026-01-10 15:14:10.236846 | orchestrator | --- 192.168.112.169 ping statistics --- 2026-01-10 15:14:10.236854 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-10 15:14:10.236861 | orchestrator | rtt min/avg/max/mdev = 1.675/3.763/7.222/2.463 ms 2026-01-10 15:14:10.237530 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:14:10.237566 | orchestrator | + ping -c3 192.168.112.171 2026-01-10 15:14:10.249885 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data. 
2026-01-10 15:14:10.249980 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=6.27 ms 2026-01-10 15:14:11.248583 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.97 ms 2026-01-10 15:14:12.246894 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=1.36 ms 2026-01-10 15:14:12.246951 | orchestrator | 2026-01-10 15:14:12.246958 | orchestrator | --- 192.168.112.171 ping statistics --- 2026-01-10 15:14:12.246963 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-01-10 15:14:12.246967 | orchestrator | rtt min/avg/max/mdev = 1.358/3.533/6.272/2.045 ms 2026-01-10 15:14:12.248580 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 15:14:12.248624 | orchestrator | + compute_list 2026-01-10 15:14:12.248629 | orchestrator | + osism manage compute list testbed-node-3 2026-01-10 15:14:16.024999 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:14:16.025067 | orchestrator | | ID | Name | Status | 2026-01-10 15:14:16.025077 | orchestrator | |--------------------------------------+--------+----------| 2026-01-10 15:14:16.025087 | orchestrator | | aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e | test-1 | ACTIVE | 2026-01-10 15:14:16.025094 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:14:16.430337 | orchestrator | + osism manage compute list testbed-node-4 2026-01-10 15:14:20.023648 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:14:20.023711 | orchestrator | | ID | Name | Status | 2026-01-10 15:14:20.023721 | orchestrator | |--------------------------------------+--------+----------| 2026-01-10 15:14:20.023728 | orchestrator | | 9b61d686-360e-45c1-adc4-2f20f560bb6f | test-3 | ACTIVE | 2026-01-10 15:14:20.023734 | orchestrator | | 03fe9a91-ef93-4ec8-8e85-710d07cf142e | test | ACTIVE | 2026-01-10 15:14:20.023741 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-01-10 15:14:20.411534 | orchestrator | + osism manage compute list testbed-node-5 2026-01-10 15:14:24.254815 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:14:24.254904 | orchestrator | | ID | Name | Status | 2026-01-10 15:14:24.254911 | orchestrator | |--------------------------------------+--------+----------| 2026-01-10 15:14:24.254915 | orchestrator | | 13541841-1852-4b59-8b0b-7c2badb02c22 | test-4 | ACTIVE | 2026-01-10 15:14:24.254920 | orchestrator | | e298516c-1be8-4521-8f5f-6bc75b56ab5a | test-2 | ACTIVE | 2026-01-10 15:14:24.254924 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:14:24.689506 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-01-10 15:14:28.193021 | orchestrator | 2026-01-10 15:14:28 | INFO  | Live migrating server 9b61d686-360e-45c1-adc4-2f20f560bb6f 2026-01-10 15:14:41.295163 | orchestrator | 2026-01-10 15:14:41 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress 2026-01-10 15:14:43.738581 | orchestrator | 2026-01-10 15:14:43 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress 2026-01-10 15:14:46.447269 | orchestrator | 2026-01-10 15:14:46 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress 2026-01-10 15:14:48.912784 | orchestrator | 2026-01-10 15:14:48 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress 2026-01-10 15:14:51.135032 | orchestrator | 2026-01-10 15:14:51 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress 2026-01-10 15:14:53.435279 | orchestrator | 2026-01-10 15:14:53 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress 2026-01-10 15:14:55.779538 | orchestrator | 2026-01-10 
15:14:55 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress 2026-01-10 15:14:58.201362 | orchestrator | 2026-01-10 15:14:58 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress 2026-01-10 15:15:00.426433 | orchestrator | 2026-01-10 15:15:00 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) completed with status ACTIVE 2026-01-10 15:15:00.426496 | orchestrator | 2026-01-10 15:15:00 | INFO  | Live migrating server 03fe9a91-ef93-4ec8-8e85-710d07cf142e 2026-01-10 15:15:12.336038 | orchestrator | 2026-01-10 15:15:12 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress 2026-01-10 15:15:14.755865 | orchestrator | 2026-01-10 15:15:14 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress 2026-01-10 15:15:17.034827 | orchestrator | 2026-01-10 15:15:17 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress 2026-01-10 15:15:19.342847 | orchestrator | 2026-01-10 15:15:19 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress 2026-01-10 15:15:21.693197 | orchestrator | 2026-01-10 15:15:21 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress 2026-01-10 15:15:24.105820 | orchestrator | 2026-01-10 15:15:24 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress 2026-01-10 15:15:26.314459 | orchestrator | 2026-01-10 15:15:26 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress 2026-01-10 15:15:28.539924 | orchestrator | 2026-01-10 15:15:28 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress 2026-01-10 15:15:30.856929 | orchestrator | 2026-01-10 15:15:30 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress 2026-01-10 
15:15:33.150677 | orchestrator | 2026-01-10 15:15:33 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress 2026-01-10 15:15:35.508237 | orchestrator | 2026-01-10 15:15:35 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) completed with status ACTIVE 2026-01-10 15:15:35.902289 | orchestrator | + compute_list 2026-01-10 15:15:35.902361 | orchestrator | + osism manage compute list testbed-node-3 2026-01-10 15:15:39.388331 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:15:39.388413 | orchestrator | | ID | Name | Status | 2026-01-10 15:15:39.388419 | orchestrator | |--------------------------------------+--------+----------| 2026-01-10 15:15:39.388424 | orchestrator | | 9b61d686-360e-45c1-adc4-2f20f560bb6f | test-3 | ACTIVE | 2026-01-10 15:15:39.388429 | orchestrator | | aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e | test-1 | ACTIVE | 2026-01-10 15:15:39.388446 | orchestrator | | 03fe9a91-ef93-4ec8-8e85-710d07cf142e | test | ACTIVE | 2026-01-10 15:15:39.388450 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:15:39.766283 | orchestrator | + osism manage compute list testbed-node-4 2026-01-10 15:15:42.702262 | orchestrator | +------+--------+----------+ 2026-01-10 15:15:42.702337 | orchestrator | | ID | Name | Status | 2026-01-10 15:15:42.702343 | orchestrator | |------+--------+----------| 2026-01-10 15:15:42.702347 | orchestrator | +------+--------+----------+ 2026-01-10 15:15:43.076263 | orchestrator | + osism manage compute list testbed-node-5 2026-01-10 15:15:46.433991 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:15:46.434164 | orchestrator | | ID | Name | Status | 2026-01-10 15:15:46.434267 | orchestrator | |--------------------------------------+--------+----------| 2026-01-10 15:15:46.434279 | orchestrator | | 13541841-1852-4b59-8b0b-7c2badb02c22 | test-4 | ACTIVE | 2026-01-10 
15:15:46.434285 | orchestrator | | e298516c-1be8-4521-8f5f-6bc75b56ab5a | test-2 | ACTIVE | 2026-01-10 15:15:46.434292 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:15:46.832572 | orchestrator | + server_ping 2026-01-10 15:15:46.833905 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-10 15:15:46.833957 | orchestrator | ++ tr -d '\r' 2026-01-10 15:15:49.906296 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:15:49.906370 | orchestrator | + ping -c3 192.168.112.156 2026-01-10 15:15:49.915587 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data. 2026-01-10 15:15:49.915660 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=5.18 ms 2026-01-10 15:15:50.914547 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.50 ms 2026-01-10 15:15:51.916831 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.99 ms 2026-01-10 15:15:51.916924 | orchestrator | 2026-01-10 15:15:51.916935 | orchestrator | --- 192.168.112.156 ping statistics --- 2026-01-10 15:15:51.916944 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-10 15:15:51.916952 | orchestrator | rtt min/avg/max/mdev = 1.986/3.224/5.183/1.401 ms 2026-01-10 15:15:51.916960 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:15:51.916967 | orchestrator | + ping -c3 192.168.112.147 2026-01-10 15:15:51.927621 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data. 
2026-01-10 15:15:51.927702 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=5.66 ms 2026-01-10 15:15:52.924829 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=1.89 ms 2026-01-10 15:15:53.926294 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=1.85 ms 2026-01-10 15:15:53.926397 | orchestrator | 2026-01-10 15:15:53.926412 | orchestrator | --- 192.168.112.147 ping statistics --- 2026-01-10 15:15:53.926423 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-01-10 15:15:53.926432 | orchestrator | rtt min/avg/max/mdev = 1.851/3.134/5.661/1.786 ms 2026-01-10 15:15:53.927142 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:15:53.927166 | orchestrator | + ping -c3 192.168.112.170 2026-01-10 15:15:53.938469 | orchestrator | PING 192.168.112.170 (192.168.112.170) 56(84) bytes of data. 2026-01-10 15:15:53.938541 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=1 ttl=63 time=5.23 ms 2026-01-10 15:15:54.936820 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=2 ttl=63 time=2.15 ms 2026-01-10 15:15:55.937575 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=3 ttl=63 time=1.95 ms 2026-01-10 15:15:55.937650 | orchestrator | 2026-01-10 15:15:55.937658 | orchestrator | --- 192.168.112.170 ping statistics --- 2026-01-10 15:15:55.937664 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-01-10 15:15:55.937669 | orchestrator | rtt min/avg/max/mdev = 1.947/3.109/5.231/1.502 ms 2026-01-10 15:15:55.937907 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:15:55.937922 | orchestrator | + ping -c3 192.168.112.169 2026-01-10 15:15:55.948459 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data. 
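The log repeats the same list → migrate → list cycle for each compute node being drained. That cycle can be sketched as a small wrapper (the `osism manage compute list` and `osism manage compute migrate --yes --target` subcommands appear verbatim in the log; the `drain_node` wrapper name is hypothetical):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Evacuate-and-verify cycle from the log: list the instances on the source
# node, live-migrate them all to the target, then list the source again
# (the second listing should come back empty, as it does in the log).
drain_node() {
    local target="$1" source="$2"
    osism manage compute list "$source"
    osism manage compute migrate --yes --target "$target" "$source"
    osism manage compute list "$source"
}
```

In the log this is followed by another `server_ping` pass, confirming the instances stay reachable across the migration.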
2026-01-10 15:15:55.948539 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=6.56 ms 2026-01-10 15:15:56.944059 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.22 ms 2026-01-10 15:15:57.945744 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.88 ms 2026-01-10 15:15:57.945820 | orchestrator | 2026-01-10 15:15:57.945827 | orchestrator | --- 192.168.112.169 ping statistics --- 2026-01-10 15:15:57.945833 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-10 15:15:57.945837 | orchestrator | rtt min/avg/max/mdev = 1.878/3.552/6.560/2.131 ms 2026-01-10 15:15:57.946609 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:15:57.946676 | orchestrator | + ping -c3 192.168.112.171 2026-01-10 15:15:57.956809 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data. 2026-01-10 15:15:57.956878 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=5.32 ms 2026-01-10 15:15:58.955430 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.18 ms 2026-01-10 15:15:59.957495 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=1.95 ms 2026-01-10 15:15:59.957568 | orchestrator | 2026-01-10 15:15:59.957575 | orchestrator | --- 192.168.112.171 ping statistics --- 2026-01-10 15:15:59.957581 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-10 15:15:59.957585 | orchestrator | rtt min/avg/max/mdev = 1.947/3.149/5.323/1.539 ms 2026-01-10 15:15:59.957590 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-01-10 15:16:03.264823 | orchestrator | 2026-01-10 15:16:03 | INFO  | Live migrating server 13541841-1852-4b59-8b0b-7c2badb02c22 2026-01-10 15:16:13.692602 | orchestrator | 2026-01-10 15:16:13 | INFO  | Live migration of 
13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress 2026-01-10 15:16:16.072322 | orchestrator | 2026-01-10 15:16:16 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress 2026-01-10 15:16:18.435662 | orchestrator | 2026-01-10 15:16:18 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress 2026-01-10 15:16:20.786801 | orchestrator | 2026-01-10 15:16:20 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress 2026-01-10 15:16:23.074166 | orchestrator | 2026-01-10 15:16:23 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress 2026-01-10 15:16:25.501629 | orchestrator | 2026-01-10 15:16:25 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress 2026-01-10 15:16:27.811016 | orchestrator | 2026-01-10 15:16:27 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress 2026-01-10 15:16:30.097311 | orchestrator | 2026-01-10 15:16:30 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress 2026-01-10 15:16:32.362685 | orchestrator | 2026-01-10 15:16:32 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress 2026-01-10 15:16:34.730573 | orchestrator | 2026-01-10 15:16:34 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) completed with status ACTIVE 2026-01-10 15:16:34.730703 | orchestrator | 2026-01-10 15:16:34 | INFO  | Live migrating server e298516c-1be8-4521-8f5f-6bc75b56ab5a 2026-01-10 15:16:45.466564 | orchestrator | 2026-01-10 15:16:45 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress 2026-01-10 15:16:47.815696 | orchestrator | 2026-01-10 15:16:47 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress 2026-01-10 15:16:50.194775 | orchestrator 
| 2026-01-10 15:16:50 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:16:52.548540 | orchestrator | 2026-01-10 15:16:52 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:16:54.821196 | orchestrator | 2026-01-10 15:16:54 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:16:57.187350 | orchestrator | 2026-01-10 15:16:57 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:16:59.595870 | orchestrator | 2026-01-10 15:16:59 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:17:01.906694 | orchestrator | 2026-01-10 15:17:01 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:17:04.203463 | orchestrator | 2026-01-10 15:17:04 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) completed with status ACTIVE
2026-01-10 15:17:04.595691 | orchestrator | + compute_list
2026-01-10 15:17:04.595788 | orchestrator | + osism manage compute list testbed-node-3
2026-01-10 15:17:08.212619 | orchestrator | +--------------------------------------+--------+----------+
2026-01-10 15:17:08.212707 | orchestrator | | ID | Name | Status |
2026-01-10 15:17:08.212715 | orchestrator | |--------------------------------------+--------+----------|
2026-01-10 15:17:08.212721 | orchestrator | | 13541841-1852-4b59-8b0b-7c2badb02c22 | test-4 | ACTIVE |
2026-01-10 15:17:08.212726 | orchestrator | | 9b61d686-360e-45c1-adc4-2f20f560bb6f | test-3 | ACTIVE |
2026-01-10 15:17:08.212731 | orchestrator | | e298516c-1be8-4521-8f5f-6bc75b56ab5a | test-2 | ACTIVE |
2026-01-10 15:17:08.212736 | orchestrator | | aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e | test-1 | ACTIVE |
2026-01-10 15:17:08.212741 | orchestrator | | 03fe9a91-ef93-4ec8-8e85-710d07cf142e | test | ACTIVE |
2026-01-10 15:17:08.212746 | orchestrator | +--------------------------------------+--------+----------+
2026-01-10 15:17:08.586598 | orchestrator | + osism manage compute list testbed-node-4
2026-01-10 15:17:11.506968 | orchestrator | +------+--------+----------+
2026-01-10 15:17:11.507061 | orchestrator | | ID | Name | Status |
2026-01-10 15:17:11.507070 | orchestrator | |------+--------+----------|
2026-01-10 15:17:11.507078 | orchestrator | +------+--------+----------+
2026-01-10 15:17:11.895937 | orchestrator | + osism manage compute list testbed-node-5
2026-01-10 15:17:14.911362 | orchestrator | +------+--------+----------+
2026-01-10 15:17:14.911433 | orchestrator | | ID | Name | Status |
2026-01-10 15:17:14.911440 | orchestrator | |------+--------+----------|
2026-01-10 15:17:14.911444 | orchestrator | +------+--------+----------+
2026-01-10 15:17:15.387031 | orchestrator | + server_ping
2026-01-10 15:17:15.389546 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-10 15:17:15.389614 | orchestrator | ++ tr -d '\r'
2026-01-10 15:17:18.895122 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:17:18.895207 | orchestrator | + ping -c3 192.168.112.156
2026-01-10 15:17:18.903512 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-01-10 15:17:18.903581 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=5.66 ms
2026-01-10 15:17:19.901669 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.45 ms
2026-01-10 15:17:20.903614 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=2.05 ms
2026-01-10 15:17:20.903711 | orchestrator |
2026-01-10 15:17:20.903726 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-01-10 15:17:20.903738 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:17:20.903749 | orchestrator | rtt min/avg/max/mdev = 2.047/3.388/5.663/1.617 ms
2026-01-10 15:17:20.903760 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:17:20.903770 | orchestrator | + ping -c3 192.168.112.147
2026-01-10 15:17:20.916255 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
2026-01-10 15:17:20.916332 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=7.07 ms
2026-01-10 15:17:21.913407 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.56 ms
2026-01-10 15:17:22.914601 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=1.95 ms
2026-01-10 15:17:22.914691 | orchestrator |
2026-01-10 15:17:22.914699 | orchestrator | --- 192.168.112.147 ping statistics ---
2026-01-10 15:17:22.914707 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:17:22.914714 | orchestrator | rtt min/avg/max/mdev = 1.949/3.859/7.065/2.280 ms
2026-01-10 15:17:22.915177 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:17:22.915201 | orchestrator | + ping -c3 192.168.112.170
2026-01-10 15:17:22.925176 | orchestrator | PING 192.168.112.170 (192.168.112.170) 56(84) bytes of data.
2026-01-10 15:17:22.925247 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=1 ttl=63 time=4.79 ms
2026-01-10 15:17:23.924921 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=2 ttl=63 time=2.26 ms
2026-01-10 15:17:24.926622 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=3 ttl=63 time=1.76 ms
2026-01-10 15:17:24.926718 | orchestrator |
2026-01-10 15:17:24.926735 | orchestrator | --- 192.168.112.170 ping statistics ---
2026-01-10 15:17:24.926751 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:17:24.926766 | orchestrator | rtt min/avg/max/mdev = 1.758/2.934/4.786/1.325 ms
2026-01-10 15:17:24.927055 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:17:24.927075 | orchestrator | + ping -c3 192.168.112.169
2026-01-10 15:17:24.940439 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2026-01-10 15:17:24.940534 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=8.33 ms
2026-01-10 15:17:25.936538 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.77 ms
2026-01-10 15:17:26.936803 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.99 ms
2026-01-10 15:17:26.938482 | orchestrator |
2026-01-10 15:17:26.938548 | orchestrator | --- 192.168.112.169 ping statistics ---
2026-01-10 15:17:26.938561 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:17:26.938570 | orchestrator | rtt min/avg/max/mdev = 1.992/4.363/8.333/2.824 ms
2026-01-10 15:17:26.938579 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:17:26.938586 | orchestrator | + ping -c3 192.168.112.171
2026-01-10 15:17:26.951602 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data.
2026-01-10 15:17:26.951697 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=6.22 ms
2026-01-10 15:17:27.949431 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=3.15 ms
2026-01-10 15:17:28.950887 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=2.37 ms
2026-01-10 15:17:28.950984 | orchestrator |
2026-01-10 15:17:28.951008 | orchestrator | --- 192.168.112.171 ping statistics ---
2026-01-10 15:17:28.951022 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-10 15:17:28.951033 | orchestrator | rtt min/avg/max/mdev = 2.373/3.915/6.222/1.661 ms
2026-01-10 15:17:28.951413 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-01-10 15:17:32.536446 | orchestrator | 2026-01-10 15:17:32 | INFO  | Live migrating server 13541841-1852-4b59-8b0b-7c2badb02c22
2026-01-10 15:17:45.147406 | orchestrator | 2026-01-10 15:17:45 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:17:47.480666 | orchestrator | 2026-01-10 15:17:47 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:17:49.851622 | orchestrator | 2026-01-10 15:17:49 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:17:52.173965 | orchestrator | 2026-01-10 15:17:52 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:17:54.427516 | orchestrator | 2026-01-10 15:17:54 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:17:56.965172 | orchestrator | 2026-01-10 15:17:56 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:17:59.236912 | orchestrator | 2026-01-10 15:17:59 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:18:01.575870 | orchestrator | 2026-01-10 15:18:01 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:18:03.875864 | orchestrator | 2026-01-10 15:18:03 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) completed with status ACTIVE
2026-01-10 15:18:03.875933 | orchestrator | 2026-01-10 15:18:03 | INFO  | Live migrating server 9b61d686-360e-45c1-adc4-2f20f560bb6f
2026-01-10 15:18:14.565928 | orchestrator | 2026-01-10 15:18:14 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:18:16.922761 | orchestrator | 2026-01-10 15:18:16 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:18:19.309713 | orchestrator | 2026-01-10 15:18:19 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:18:21.617249 | orchestrator | 2026-01-10 15:18:21 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:18:23.923001 | orchestrator | 2026-01-10 15:18:23 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:18:26.301212 | orchestrator | 2026-01-10 15:18:26 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:18:28.707049 | orchestrator | 2026-01-10 15:18:28 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:18:31.036939 | orchestrator | 2026-01-10 15:18:31 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:18:33.537978 | orchestrator | 2026-01-10 15:18:33 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) completed with status ACTIVE
2026-01-10 15:18:33.538169 | orchestrator | 2026-01-10 15:18:33 | INFO  | Live migrating server e298516c-1be8-4521-8f5f-6bc75b56ab5a
2026-01-10 15:18:44.804366 | orchestrator | 2026-01-10 15:18:44 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:18:47.241421 | orchestrator | 2026-01-10 15:18:47 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:18:49.636744 | orchestrator | 2026-01-10 15:18:49 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:18:51.946294 | orchestrator | 2026-01-10 15:18:51 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:18:54.257534 | orchestrator | 2026-01-10 15:18:54 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:18:56.552646 | orchestrator | 2026-01-10 15:18:56 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:18:58.849655 | orchestrator | 2026-01-10 15:18:58 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:19:01.224938 | orchestrator | 2026-01-10 15:19:01 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:19:03.561485 | orchestrator | 2026-01-10 15:19:03 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) completed with status ACTIVE
2026-01-10 15:19:03.561570 | orchestrator | 2026-01-10 15:19:03 | INFO  | Live migrating server aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e
2026-01-10 15:19:13.696182 | orchestrator | 2026-01-10 15:19:13 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:19:16.063458 | orchestrator | 2026-01-10 15:19:16 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:19:18.439359 | orchestrator | 2026-01-10 15:19:18 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:19:20.846240 | orchestrator | 2026-01-10 15:19:20 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:19:23.114823 | orchestrator | 2026-01-10 15:19:23 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:19:25.399263 | orchestrator | 2026-01-10 15:19:25 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:19:27.885233 | orchestrator | 2026-01-10 15:19:27 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:19:30.306897 | orchestrator | 2026-01-10 15:19:30 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:19:32.636425 | orchestrator | 2026-01-10 15:19:32 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) completed with status ACTIVE
2026-01-10 15:19:32.636513 | orchestrator | 2026-01-10 15:19:32 | INFO  | Live migrating server 03fe9a91-ef93-4ec8-8e85-710d07cf142e
2026-01-10 15:19:44.525279 | orchestrator | 2026-01-10 15:19:44 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:19:46.869859 | orchestrator | 2026-01-10 15:19:46 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:19:49.207521 | orchestrator | 2026-01-10 15:19:49 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:19:51.485584 | orchestrator | 2026-01-10 15:19:51 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:19:53.772962 | orchestrator | 2026-01-10 15:19:53 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:19:56.118496 | orchestrator | 2026-01-10 15:19:56 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:19:58.479337 | orchestrator | 2026-01-10 15:19:58 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:20:00.844914 | orchestrator | 2026-01-10 15:20:00 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:20:03.223940 | orchestrator | 2026-01-10 15:20:03 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:20:05.568394 | orchestrator | 2026-01-10 15:20:05 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:20:07.929070 | orchestrator | 2026-01-10 15:20:07 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) completed with status ACTIVE
2026-01-10 15:20:08.366220 | orchestrator | + compute_list
2026-01-10 15:20:08.366304 | orchestrator | + osism manage compute list testbed-node-3
2026-01-10 15:20:11.389339 | orchestrator | +------+--------+----------+
2026-01-10 15:20:11.389426 | orchestrator | | ID | Name | Status |
2026-01-10 15:20:11.389434 | orchestrator | |------+--------+----------|
2026-01-10 15:20:11.389440 | orchestrator | +------+--------+----------+
2026-01-10 15:20:11.807218 | orchestrator | + osism manage compute list testbed-node-4
2026-01-10 15:20:15.285530 | orchestrator | +--------------------------------------+--------+----------+
2026-01-10 15:20:15.285608 | orchestrator | | ID | Name | Status |
2026-01-10 15:20:15.285614 | orchestrator | |--------------------------------------+--------+----------|
2026-01-10 15:20:15.285619 | orchestrator | | 13541841-1852-4b59-8b0b-7c2badb02c22 | test-4 | ACTIVE |
2026-01-10 15:20:15.285624 | orchestrator | | 9b61d686-360e-45c1-adc4-2f20f560bb6f | test-3 | ACTIVE |
2026-01-10 15:20:15.285628 | orchestrator | | e298516c-1be8-4521-8f5f-6bc75b56ab5a | test-2 | ACTIVE |
2026-01-10 15:20:15.285632 | orchestrator | | aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e | test-1 | ACTIVE |
2026-01-10 15:20:15.285636 | orchestrator | | 03fe9a91-ef93-4ec8-8e85-710d07cf142e | test | ACTIVE |
2026-01-10 15:20:15.285660 | orchestrator | +--------------------------------------+--------+----------+
2026-01-10 15:20:15.698930 | orchestrator | + osism manage compute list testbed-node-5
2026-01-10 15:20:18.622645 | orchestrator | +------+--------+----------+
2026-01-10 15:20:18.622735 | orchestrator | | ID | Name | Status |
2026-01-10 15:20:18.622744 | orchestrator | |------+--------+----------|
2026-01-10 15:20:18.622751 | orchestrator | +------+--------+----------+
2026-01-10 15:20:19.026366 | orchestrator | + server_ping
2026-01-10 15:20:19.027253 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-10 15:20:19.027438 | orchestrator | ++ tr -d '\r'
2026-01-10 15:20:22.112679 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:20:22.112754 | orchestrator | + ping -c3 192.168.112.156
2026-01-10 15:20:22.124963 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-01-10 15:20:22.125035 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=8.04 ms
2026-01-10 15:20:23.121405 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=3.00 ms
2026-01-10 15:20:24.122435 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=2.32 ms
2026-01-10 15:20:24.122523 | orchestrator |
2026-01-10 15:20:24.122534 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-01-10 15:20:24.122542 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:20:24.122549 | orchestrator | rtt min/avg/max/mdev = 2.320/4.453/8.036/2.548 ms
2026-01-10 15:20:24.123233 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:20:24.123302 | orchestrator | + ping -c3 192.168.112.147
2026-01-10 15:20:24.132940 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
2026-01-10 15:20:24.133032 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=4.99 ms
2026-01-10 15:20:25.131852 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.58 ms
2026-01-10 15:20:26.134140 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.05 ms
2026-01-10 15:20:26.134229 | orchestrator |
2026-01-10 15:20:26.134240 | orchestrator | --- 192.168.112.147 ping statistics ---
2026-01-10 15:20:26.134250 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:20:26.134257 | orchestrator | rtt min/avg/max/mdev = 2.051/3.207/4.991/1.279 ms
2026-01-10 15:20:26.134720 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:20:26.134734 | orchestrator | + ping -c3 192.168.112.170
2026-01-10 15:20:26.147624 | orchestrator | PING 192.168.112.170 (192.168.112.170) 56(84) bytes of data.
2026-01-10 15:20:26.147693 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=1 ttl=63 time=8.27 ms
2026-01-10 15:20:27.143391 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=2 ttl=63 time=2.25 ms
2026-01-10 15:20:28.145276 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=3 ttl=63 time=1.99 ms
2026-01-10 15:20:28.150503 | orchestrator |
2026-01-10 15:20:28.150561 | orchestrator | --- 192.168.112.170 ping statistics ---
2026-01-10 15:20:28.150574 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:20:28.150584 | orchestrator | rtt min/avg/max/mdev = 1.994/4.171/8.271/2.900 ms
2026-01-10 15:20:28.150612 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:20:28.150629 | orchestrator | + ping -c3 192.168.112.169
2026-01-10 15:20:28.163142 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2026-01-10 15:20:28.163218 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=11.1 ms
2026-01-10 15:20:29.155989 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.71 ms
2026-01-10 15:20:30.157769 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.39 ms
2026-01-10 15:20:30.157856 | orchestrator |
2026-01-10 15:20:30.157865 | orchestrator | --- 192.168.112.169 ping statistics ---
2026-01-10 15:20:30.157871 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:20:30.157877 | orchestrator | rtt min/avg/max/mdev = 2.388/5.407/11.130/4.048 ms
2026-01-10 15:20:30.158993 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:20:30.159059 | orchestrator | + ping -c3 192.168.112.171
2026-01-10 15:20:30.171648 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data.
2026-01-10 15:20:30.171734 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=7.74 ms
2026-01-10 15:20:31.168289 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.43 ms
2026-01-10 15:20:32.170313 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=2.12 ms
2026-01-10 15:20:32.170379 | orchestrator |
2026-01-10 15:20:32.170386 | orchestrator | --- 192.168.112.171 ping statistics ---
2026-01-10 15:20:32.170393 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:20:32.170397 | orchestrator | rtt min/avg/max/mdev = 2.121/4.096/7.736/2.576 ms
2026-01-10 15:20:32.170898 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-01-10 15:20:35.529377 | orchestrator | 2026-01-10 15:20:35 | INFO  | Live migrating server 13541841-1852-4b59-8b0b-7c2badb02c22
2026-01-10 15:20:45.766987 | orchestrator | 2026-01-10 15:20:45 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:20:48.130252 | orchestrator | 2026-01-10 15:20:48 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:20:50.471692 | orchestrator | 2026-01-10 15:20:50 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:20:52.827601 | orchestrator | 2026-01-10 15:20:52 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:20:55.127115 | orchestrator | 2026-01-10 15:20:55 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:20:57.430220 | orchestrator | 2026-01-10 15:20:57 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:20:59.689463 | orchestrator | 2026-01-10 15:20:59 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:21:02.043682 | orchestrator | 2026-01-10 15:21:02 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) is still in progress
2026-01-10 15:21:04.330430 | orchestrator | 2026-01-10 15:21:04 | INFO  | Live migration of 13541841-1852-4b59-8b0b-7c2badb02c22 (test-4) completed with status ACTIVE
2026-01-10 15:21:04.330513 | orchestrator | 2026-01-10 15:21:04 | INFO  | Live migrating server 9b61d686-360e-45c1-adc4-2f20f560bb6f
2026-01-10 15:21:14.124943 | orchestrator | 2026-01-10 15:21:14 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:21:16.484319 | orchestrator | 2026-01-10 15:21:16 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:21:18.817148 | orchestrator | 2026-01-10 15:21:18 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:21:21.128506 | orchestrator | 2026-01-10 15:21:21 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:21:23.410520 | orchestrator | 2026-01-10 15:21:23 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:21:25.802610 | orchestrator | 2026-01-10 15:21:25 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:21:28.202420 | orchestrator | 2026-01-10 15:21:28 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:21:30.504412 | orchestrator | 2026-01-10 15:21:30 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:21:32.882691 | orchestrator | 2026-01-10 15:21:32 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) is still in progress
2026-01-10 15:21:35.270952 | orchestrator | 2026-01-10 15:21:35 | INFO  | Live migration of 9b61d686-360e-45c1-adc4-2f20f560bb6f (test-3) completed with status ACTIVE
2026-01-10 15:21:35.271204 | orchestrator | 2026-01-10 15:21:35 | INFO  | Live migrating server e298516c-1be8-4521-8f5f-6bc75b56ab5a
2026-01-10 15:21:45.062509 | orchestrator | 2026-01-10 15:21:45 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:21:47.441001 | orchestrator | 2026-01-10 15:21:47 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:21:49.790146 | orchestrator | 2026-01-10 15:21:49 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:21:52.070272 | orchestrator | 2026-01-10 15:21:52 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:21:54.395768 | orchestrator | 2026-01-10 15:21:54 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:21:56.681086 | orchestrator | 2026-01-10 15:21:56 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:21:59.068993 | orchestrator | 2026-01-10 15:21:59 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:22:01.424891 | orchestrator | 2026-01-10 15:22:01 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) is still in progress
2026-01-10 15:22:03.727713 | orchestrator | 2026-01-10 15:22:03 | INFO  | Live migration of e298516c-1be8-4521-8f5f-6bc75b56ab5a (test-2) completed with status ACTIVE
2026-01-10 15:22:03.727828 | orchestrator | 2026-01-10 15:22:03 | INFO  | Live migrating server aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e
2026-01-10 15:22:14.056156 | orchestrator | 2026-01-10 15:22:14 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:22:16.441685 | orchestrator | 2026-01-10 15:22:16 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:22:18.795053 | orchestrator | 2026-01-10 15:22:18 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:22:21.067675 | orchestrator | 2026-01-10 15:22:21 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:22:23.438907 | orchestrator | 2026-01-10 15:22:23 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:22:25.714191 | orchestrator | 2026-01-10 15:22:25 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:22:28.004797 | orchestrator | 2026-01-10 15:22:28 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:22:30.371827 | orchestrator | 2026-01-10 15:22:30 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) is still in progress
2026-01-10 15:22:32.749426 | orchestrator | 2026-01-10 15:22:32 | INFO  | Live migration of aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e (test-1) completed with status ACTIVE
2026-01-10 15:22:32.749513 | orchestrator | 2026-01-10 15:22:32 | INFO  | Live migrating server 03fe9a91-ef93-4ec8-8e85-710d07cf142e
2026-01-10 15:22:43.013325 | orchestrator | 2026-01-10 15:22:43 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:22:45.378163 | orchestrator | 2026-01-10 15:22:45 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:22:47.721988 | orchestrator | 2026-01-10 15:22:47 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:22:50.031592 | orchestrator | 2026-01-10 15:22:50 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:22:52.369640 | orchestrator | 2026-01-10 15:22:52 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:22:54.621576 | orchestrator | 2026-01-10 15:22:54 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:22:56.915778 | orchestrator | 2026-01-10 15:22:56 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:22:59.410430 | orchestrator | 2026-01-10 15:22:59 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:23:01.692236 | orchestrator | 2026-01-10 15:23:01 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:23:03.997615 | orchestrator | 2026-01-10 15:23:03 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) is still in progress
2026-01-10 15:23:06.273843 | orchestrator | 2026-01-10 15:23:06 | INFO  | Live migration of 03fe9a91-ef93-4ec8-8e85-710d07cf142e (test) completed with status ACTIVE
2026-01-10 15:23:06.703244 | orchestrator | + compute_list
2026-01-10 15:23:06.703323 | orchestrator | + osism manage compute list testbed-node-3
2026-01-10 15:23:09.745912 | orchestrator | +------+--------+----------+
2026-01-10 15:23:09.746049 | orchestrator | | ID | Name | Status |
2026-01-10 15:23:09.746060 | orchestrator | |------+--------+----------|
2026-01-10 15:23:09.746066 | orchestrator | +------+--------+----------+
2026-01-10 15:23:10.206780 | orchestrator | + osism manage compute list testbed-node-4
2026-01-10 15:23:13.280404 | orchestrator | +------+--------+----------+
2026-01-10 15:23:13.280483 | orchestrator | | ID | Name | Status |
2026-01-10 15:23:13.280492 | orchestrator | |------+--------+----------|
2026-01-10 15:23:13.280499 | orchestrator | +------+--------+----------+
2026-01-10 15:23:13.716714 | orchestrator | + osism manage compute list testbed-node-5
2026-01-10 15:23:17.217116 | orchestrator | +--------------------------------------+--------+----------+
2026-01-10 15:23:17.217191 | orchestrator | | ID | Name | Status |
2026-01-10 15:23:17.217198 | orchestrator | |--------------------------------------+--------+----------|
2026-01-10 15:23:17.217203 | orchestrator | | 13541841-1852-4b59-8b0b-7c2badb02c22 | test-4 | ACTIVE |
2026-01-10 15:23:17.217208 | orchestrator | | 9b61d686-360e-45c1-adc4-2f20f560bb6f | test-3 | ACTIVE |
2026-01-10 15:23:17.217213 | orchestrator | | e298516c-1be8-4521-8f5f-6bc75b56ab5a | test-2 | ACTIVE |
2026-01-10 15:23:17.217218 | orchestrator | | aaf4fb2a-fc8e-4f8d-8c6a-067003864a0e | test-1 | ACTIVE |
2026-01-10 15:23:17.217224 | orchestrator | | 03fe9a91-ef93-4ec8-8e85-710d07cf142e | test | ACTIVE |
2026-01-10 15:23:17.217229 | orchestrator | +--------------------------------------+--------+----------+
2026-01-10 15:23:17.636349 | orchestrator | + server_ping
2026-01-10 15:23:17.638628 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-10 15:23:17.639096 | orchestrator | ++ tr -d '\r'
2026-01-10 15:23:20.766515 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:23:20.766601 | orchestrator | + ping -c3 192.168.112.156
2026-01-10 15:23:20.777475 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-01-10 15:23:20.777556 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=6.45 ms
2026-01-10 15:23:21.775508 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.63 ms
2026-01-10 15:23:22.777326 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=2.18 ms
2026-01-10 15:23:22.777398 | orchestrator |
2026-01-10 15:23:22.777405 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-01-10 15:23:22.777411 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:23:22.777469 | orchestrator | rtt min/avg/max/mdev = 2.176/3.750/6.447/1.915 ms
2026-01-10 15:23:22.778144 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:23:22.778173 | orchestrator | + ping -c3 192.168.112.147
2026-01-10 15:23:22.790840 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
2026-01-10 15:23:22.790924 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=8.31 ms
2026-01-10 15:23:23.788560 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.71 ms
2026-01-10 15:23:24.789242 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=1.99 ms
2026-01-10 15:23:24.789327 | orchestrator |
2026-01-10 15:23:24.789341 | orchestrator | --- 192.168.112.147 ping statistics ---
2026-01-10 15:23:24.789393 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:23:24.789404 | orchestrator | rtt min/avg/max/mdev = 1.994/4.339/8.310/2.823 ms
2026-01-10 15:23:24.790107 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:23:24.790166 | orchestrator | + ping -c3 192.168.112.170
2026-01-10 15:23:24.802618 | orchestrator | PING 192.168.112.170 (192.168.112.170) 56(84) bytes of data.
2026-01-10 15:23:24.802697 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=1 ttl=63 time=8.01 ms
2026-01-10 15:23:25.798522 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=2 ttl=63 time=2.26 ms
2026-01-10 15:23:26.800560 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=3 ttl=63 time=2.25 ms
2026-01-10 15:23:26.800746 | orchestrator |
2026-01-10 15:23:26.800774 | orchestrator | --- 192.168.112.170 ping statistics ---
2026-01-10 15:23:26.800796 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:23:26.800814 | orchestrator | rtt min/avg/max/mdev = 2.249/4.172/8.007/2.711 ms
2026-01-10 15:23:26.801332 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:23:26.801406 | orchestrator | + ping -c3 192.168.112.169
2026-01-10 15:23:26.815508 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2026-01-10 15:23:26.815582 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=8.77 ms
2026-01-10 15:23:27.810837 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.83 ms
2026-01-10 15:23:28.811398 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.98 ms
2026-01-10 15:23:28.811518 | orchestrator |
2026-01-10 15:23:28.811543 | orchestrator | --- 192.168.112.169 ping statistics ---
2026-01-10 15:23:28.811563 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:23:28.811580 | orchestrator | rtt min/avg/max/mdev = 1.978/4.527/8.773/3.022 ms
2026-01-10 15:23:28.812346 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:23:28.812412 | orchestrator | + ping -c3 192.168.112.171
2026-01-10 15:23:28.820891 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data.
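For readability, the `server_ping` step being traced in this part of the log (the `+`/`++` lines are `set -x` output) can be summarized as a small shell function. This is a sketch reconstructed from the trace, not the testbed's actual script; the `openstack` invocation and the `tr -d '\r'` filter are taken verbatim from the traced commands, while the function body and quoting are assumptions.

```shell
# Sketch of the server_ping step reconstructed from the set -x trace:
# list all ACTIVE floating IPs in the "test" cloud, strip stray carriage
# returns from the CLI output, and ping each address three times.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

The `tr -d '\r'` matters: without it, carriage returns in the CLI output would be embedded in each address and the subsequent `ping` calls would fail to resolve them.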
2026-01-10 15:23:28.821032 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=4.91 ms
2026-01-10 15:23:29.819751 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.33 ms
2026-01-10 15:23:30.821286 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=2.11 ms
2026-01-10 15:23:30.822209 | orchestrator |
2026-01-10 15:23:30.822262 | orchestrator | --- 192.168.112.171 ping statistics ---
2026-01-10 15:23:30.822272 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:23:30.822280 | orchestrator | rtt min/avg/max/mdev = 2.106/3.116/4.912/1.272 ms
2026-01-10 15:23:30.930616 | orchestrator | ok: Runtime: 0:23:37.291135
2026-01-10 15:23:30.988921 |
2026-01-10 15:23:30.989110 | TASK [Run tempest]
2026-01-10 15:23:31.535688 | orchestrator | skipping: Conditional result was False
2026-01-10 15:23:31.559478 |
2026-01-10 15:23:31.559698 | TASK [Check prometheus alert status]
2026-01-10 15:23:32.100511 | orchestrator | skipping: Conditional result was False
2026-01-10 15:23:32.104262 |
2026-01-10 15:23:32.104420 | PLAY RECAP
2026-01-10 15:23:32.104529 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2026-01-10 15:23:32.104573 |
2026-01-10 15:23:32.353662 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-10 15:23:32.355038 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-10 15:23:33.232238 |
2026-01-10 15:23:33.232423 | PLAY [Post output play]
2026-01-10 15:23:33.248445 |
2026-01-10 15:23:33.248627 | LOOP [stage-output : Register sources]
2026-01-10 15:23:33.320160 |
2026-01-10 15:23:33.320564 | TASK [stage-output : Check sudo]
2026-01-10 15:23:34.226187 | orchestrator | sudo: a password is required
2026-01-10 15:23:34.367335 | orchestrator | ok: Runtime: 0:00:00.014204
2026-01-10 15:23:34.383276 |
2026-01-10 15:23:34.383463 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-10 15:23:34.437693 |
2026-01-10 15:23:34.438032 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-10 15:23:34.517796 | orchestrator | ok
2026-01-10 15:23:34.526770 |
2026-01-10 15:23:34.526941 | LOOP [stage-output : Ensure target folders exist]
2026-01-10 15:23:35.025329 | orchestrator | ok: "docs"
2026-01-10 15:23:35.025624 |
2026-01-10 15:23:35.314716 | orchestrator | ok: "artifacts"
2026-01-10 15:23:35.605972 | orchestrator | ok: "logs"
2026-01-10 15:23:35.632398 |
2026-01-10 15:23:35.632595 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-10 15:23:35.683293 |
2026-01-10 15:23:35.683598 | TASK [stage-output : Make all log files readable]
2026-01-10 15:23:35.994356 | orchestrator | ok
2026-01-10 15:23:36.005594 |
2026-01-10 15:23:36.005763 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-10 15:23:36.051782 | orchestrator | skipping: Conditional result was False
2026-01-10 15:23:36.067982 |
2026-01-10 15:23:36.068173 | TASK [stage-output : Discover log files for compression]
2026-01-10 15:23:36.093905 | orchestrator | skipping: Conditional result was False
2026-01-10 15:23:36.102026 |
2026-01-10 15:23:36.102301 | LOOP [stage-output : Archive everything from logs]
2026-01-10 15:23:36.145733 |
2026-01-10 15:23:36.145911 | PLAY [Post cleanup play]
2026-01-10 15:23:36.154716 |
2026-01-10 15:23:36.154859 | TASK [Set cloud fact (Zuul deployment)]
2026-01-10 15:23:36.216187 | orchestrator | ok
2026-01-10 15:23:36.225906 |
2026-01-10 15:23:36.226042 | TASK [Set cloud fact (local deployment)]
2026-01-10 15:23:36.262826 | orchestrator | skipping: Conditional result was False
2026-01-10 15:23:36.282567 |
2026-01-10 15:23:36.282802 | TASK [Clean the cloud environment]
2026-01-10 15:23:36.939739 | orchestrator | 2026-01-10 15:23:36 - clean up servers
2026-01-10 15:23:37.709422 | orchestrator | 2026-01-10 15:23:37 - testbed-manager
2026-01-10 15:23:37.795902 | orchestrator | 2026-01-10 15:23:37 - testbed-node-5
2026-01-10 15:23:37.886425 | orchestrator | 2026-01-10 15:23:37 - testbed-node-0
2026-01-10 15:23:37.977056 | orchestrator | 2026-01-10 15:23:37 - testbed-node-2
2026-01-10 15:23:38.077014 | orchestrator | 2026-01-10 15:23:38 - testbed-node-1
2026-01-10 15:23:38.178272 | orchestrator | 2026-01-10 15:23:38 - testbed-node-3
2026-01-10 15:23:38.270400 | orchestrator | 2026-01-10 15:23:38 - testbed-node-4
2026-01-10 15:23:38.365147 | orchestrator | 2026-01-10 15:23:38 - clean up keypairs
2026-01-10 15:23:38.389040 | orchestrator | 2026-01-10 15:23:38 - testbed
2026-01-10 15:23:38.418514 | orchestrator | 2026-01-10 15:23:38 - wait for servers to be gone
2026-01-10 15:23:49.355364 | orchestrator | 2026-01-10 15:23:49 - clean up ports
2026-01-10 15:23:49.568768 | orchestrator | 2026-01-10 15:23:49 - 19e4fa68-284e-4fe0-844c-b3fe49195012
2026-01-10 15:23:49.930537 | orchestrator | 2026-01-10 15:23:49 - 4984e1bb-fc36-4d9e-a573-886e75c5eccf
2026-01-10 15:23:50.213045 | orchestrator | 2026-01-10 15:23:50 - 4deeee36-b5bd-4bbc-8bcd-89b4a6cae54b
2026-01-10 15:23:50.468371 | orchestrator | 2026-01-10 15:23:50 - 74a59b1d-2677-46d7-8908-bab13f025adf
2026-01-10 15:23:50.678277 | orchestrator | 2026-01-10 15:23:50 - 9f0d93f3-7356-4748-a507-9802b73b3be2
2026-01-10 15:23:50.920809 | orchestrator | 2026-01-10 15:23:50 - c1bebf96-dbef-4b1a-826e-cbbed58f3638
2026-01-10 15:23:51.762845 | orchestrator | 2026-01-10 15:23:51 - c3f16177-f18f-43d9-a589-89daf39a2c46
2026-01-10 15:23:51.966296 | orchestrator | 2026-01-10 15:23:51 - clean up volumes
2026-01-10 15:23:52.093833 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-2-node-base
2026-01-10 15:23:52.132268 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-3-node-base
2026-01-10 15:23:52.176920 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-5-node-base
2026-01-10 15:23:52.222347 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-4-node-base
2026-01-10 15:23:52.266052 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-0-node-base
2026-01-10 15:23:52.306862 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-1-node-base
2026-01-10 15:23:52.351363 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-8-node-5
2026-01-10 15:23:52.392531 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-manager-base
2026-01-10 15:23:52.436564 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-6-node-3
2026-01-10 15:23:52.481153 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-3-node-3
2026-01-10 15:23:52.526525 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-7-node-4
2026-01-10 15:23:52.570583 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-2-node-5
2026-01-10 15:23:52.613793 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-5-node-5
2026-01-10 15:23:52.659488 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-1-node-4
2026-01-10 15:23:52.705639 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-4-node-4
2026-01-10 15:23:52.751909 | orchestrator | 2026-01-10 15:23:52 - testbed-volume-0-node-3
2026-01-10 15:23:52.794654 | orchestrator | 2026-01-10 15:23:52 - disconnect routers
2026-01-10 15:23:53.451226 | orchestrator | 2026-01-10 15:23:53 - testbed
2026-01-10 15:23:54.365166 | orchestrator | 2026-01-10 15:23:54 - clean up subnets
2026-01-10 15:23:54.417627 | orchestrator | 2026-01-10 15:23:54 - subnet-testbed-management
2026-01-10 15:23:54.593454 | orchestrator | 2026-01-10 15:23:54 - clean up networks
2026-01-10 15:23:54.755272 | orchestrator | 2026-01-10 15:23:54 - net-testbed-management
2026-01-10 15:23:55.059318 | orchestrator | 2026-01-10 15:23:55 - clean up security groups
2026-01-10 15:23:55.100097 | orchestrator | 2026-01-10 15:23:55 - testbed-node
2026-01-10 15:23:55.205268 | orchestrator | 2026-01-10 15:23:55 - testbed-management
2026-01-10 15:23:55.316329 | orchestrator | 2026-01-10 15:23:55 - clean up floating ips
2026-01-10 15:23:55.352731 | orchestrator | 2026-01-10 15:23:55 - 81.163.193.62
2026-01-10 15:23:55.690241 | orchestrator | 2026-01-10 15:23:55 - clean up routers
2026-01-10 15:23:55.750178 | orchestrator | 2026-01-10 15:23:55 - testbed
2026-01-10 15:23:56.849815 | orchestrator | ok: Runtime: 0:00:20.124007
2026-01-10 15:23:56.854578 |
2026-01-10 15:23:56.854747 | PLAY RECAP
2026-01-10 15:23:56.854912 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-10 15:23:56.854982 |
2026-01-10 15:23:57.018445 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-10 15:23:57.021074 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-10 15:23:57.788665 |
2026-01-10 15:23:57.788843 | PLAY [Cleanup play]
2026-01-10 15:23:57.806640 |
2026-01-10 15:23:57.806799 | TASK [Set cloud fact (Zuul deployment)]
2026-01-10 15:23:57.859237 | orchestrator | ok
2026-01-10 15:23:57.867874 |
2026-01-10 15:23:57.868032 | TASK [Set cloud fact (local deployment)]
2026-01-10 15:23:57.912891 | orchestrator | skipping: Conditional result was False
2026-01-10 15:23:57.933054 |
2026-01-10 15:23:57.933326 | TASK [Clean the cloud environment]
2026-01-10 15:23:59.292364 | orchestrator | 2026-01-10 15:23:59 - clean up servers
2026-01-10 15:23:59.767116 | orchestrator | 2026-01-10 15:23:59 - clean up keypairs
2026-01-10 15:23:59.784686 | orchestrator | 2026-01-10 15:23:59 - wait for servers to be gone
2026-01-10 15:23:59.825899 | orchestrator | 2026-01-10 15:23:59 - clean up ports
2026-01-10 15:23:59.908295 | orchestrator | 2026-01-10 15:23:59 - clean up volumes
2026-01-10 15:23:59.975206 | orchestrator | 2026-01-10 15:23:59 - disconnect routers
2026-01-10 15:23:59.998578 | orchestrator | 2026-01-10 15:23:59 - clean up subnets
2026-01-10 15:24:00.023349 | orchestrator | 2026-01-10 15:24:00 - clean up networks
2026-01-10 15:24:00.180054 | orchestrator | 2026-01-10 15:24:00 - clean up security groups
2026-01-10 15:24:00.217493 | orchestrator | 2026-01-10 15:24:00 - clean up floating ips
2026-01-10 15:24:00.245772 | orchestrator | 2026-01-10 15:24:00 - clean up routers
2026-01-10 15:24:00.479855 | orchestrator | ok: Runtime: 0:00:01.436341
2026-01-10 15:24:00.484327 |
2026-01-10 15:24:00.484496 | PLAY RECAP
2026-01-10 15:24:00.484624 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-10 15:24:00.484688 |
2026-01-10 15:24:00.624823 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-10 15:24:00.627270 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-10 15:24:01.400398 |
2026-01-10 15:24:01.400571 | PLAY [Base post-fetch]
2026-01-10 15:24:01.415850 |
2026-01-10 15:24:01.416019 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-10 15:24:01.481951 | orchestrator | skipping: Conditional result was False
2026-01-10 15:24:01.497449 |
2026-01-10 15:24:01.497704 | TASK [fetch-output : Set log path for single node]
2026-01-10 15:24:01.557983 | orchestrator | ok
2026-01-10 15:24:01.568604 |
2026-01-10 15:24:01.568772 | LOOP [fetch-output : Ensure local output dirs]
2026-01-10 15:24:02.081848 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/20a7f85b13b844a5ba4884b904239a95/work/logs"
2026-01-10 15:24:02.374973 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/20a7f85b13b844a5ba4884b904239a95/work/artifacts"
2026-01-10 15:24:02.660393 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/20a7f85b13b844a5ba4884b904239a95/work/docs"
2026-01-10 15:24:02.683109 |
2026-01-10 15:24:02.683302 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-10 15:24:03.670911 | orchestrator | changed: .d..t...... ./
2026-01-10 15:24:03.671333 | orchestrator | changed: All items complete
2026-01-10 15:24:03.671402 |
2026-01-10 15:24:04.449495 | orchestrator | changed: .d..t...... ./
2026-01-10 15:24:05.205750 | orchestrator | changed: .d..t...... ./
2026-01-10 15:24:05.237024 |
2026-01-10 15:24:05.237199 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-10 15:24:05.803346 | orchestrator -> localhost | ok: Item: artifacts Runtime: 0:00:00.005599
2026-01-10 15:24:06.096441 | orchestrator -> localhost | ok: Item: docs Runtime: 0:00:00.012433
2026-01-10 15:24:06.109717 |
2026-01-10 15:24:06.109838 | PLAY RECAP
2026-01-10 15:24:06.109892 | orchestrator | ok: 4 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-10 15:24:06.109919 |
2026-01-10 15:24:06.250156 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-10 15:24:06.252419 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-10 15:24:07.052381 |
2026-01-10 15:24:07.052809 | PLAY [Base post]
2026-01-10 15:24:07.084632 |
2026-01-10 15:24:07.084844 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-10 15:24:08.243660 | orchestrator | changed
2026-01-10 15:24:08.253610 |
2026-01-10 15:24:08.253752 | PLAY RECAP
2026-01-10 15:24:08.253824 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-10 15:24:08.253891 |
2026-01-10 15:24:08.388388 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-10 15:24:08.391320 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-10 15:24:09.252846 |
2026-01-10 15:24:09.253032 | PLAY [Base post-logs]
2026-01-10 15:24:09.266729 |
2026-01-10 15:24:09.267037 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-10 15:24:09.784433 | localhost | changed
2026-01-10 15:24:09.800317 |
2026-01-10 15:24:09.800518 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-10 15:24:09.840977 | localhost | ok
2026-01-10 15:24:09.848631 |
2026-01-10 15:24:09.848791 | TASK [Set zuul-log-path fact]
2026-01-10 15:24:09.867626 | localhost | ok
2026-01-10 15:24:09.884876 |
2026-01-10 15:24:09.885042 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-10 15:24:09.923923 | localhost | ok
2026-01-10 15:24:09.927718 |
2026-01-10 15:24:09.927830 | TASK [upload-logs : Create log directories]
2026-01-10 15:24:10.467239 | localhost | changed
2026-01-10 15:24:10.474980 |
2026-01-10 15:24:10.475307 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-10 15:24:11.030975 | localhost -> localhost | ok: Runtime: 0:00:00.006125
2026-01-10 15:24:11.037184 |
2026-01-10 15:24:11.037390 | TASK [upload-logs : Upload logs to log server]
2026-01-10 15:24:11.655108 | localhost | Output suppressed because no_log was given
2026-01-10 15:24:11.658607 |
2026-01-10 15:24:11.658763 | LOOP [upload-logs : Compress console log and json output]
2026-01-10 15:24:11.717713 | localhost | skipping: Conditional result was False
2026-01-10 15:24:11.722910 | localhost | skipping: Conditional result was False
2026-01-10 15:24:11.735244 |
2026-01-10 15:24:11.735657 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-10 15:24:11.789420 | localhost | skipping: Conditional result was False
2026-01-10 15:24:11.790081 |
2026-01-10 15:24:11.793887 | localhost | skipping: Conditional result was False
2026-01-10 15:24:11.802811 |
2026-01-10 15:24:11.803085 | LOOP [upload-logs : Upload console log and json output]
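The "Live migration of … is still in progress" INFO lines near the top of this excerpt appear to come from a wait loop that polls the migrating server's status every couple of seconds. The real logic lives inside the `osism` tooling; the following is only a hypothetical sketch of such a loop, with `openstack server show`, the `MIGRATING` status check, and the message strings used as illustrative assumptions.

```shell
# Hypothetical polling loop illustrating the "still in progress" INFO
# messages seen in the log above (not osism's actual implementation).
# Polls the server's status until it leaves MIGRATING, then reports
# the final status, matching the cadence of the logged messages.
wait_for_migration() {
    server_id="$1"
    while true; do
        status="$(openstack server show "$server_id" -f value -c status)"
        if [ "$status" = "MIGRATING" ]; then
            echo "Live migration of $server_id is still in progress"
            sleep 2
        else
            echo "Live migration of $server_id completed with status $status"
            return 0
        fi
    done
}
```

In the log, the loop iterates roughly every two seconds from 15:22:52 until 15:23:06, when the server returns to ACTIVE and the job proceeds to the `compute_list` check.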